feat: Phase 3 — PostgreSQL provider (#54)
Conversation
📝 Walkthrough

Adds a PostgreSQL provider and integrates it into the CLI: provider implementation, Debian package unpacking, sandbox creation/start/replica support, and provider-aware CLI flags and commands.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant Cmd as CLI Command
    participant Reg as providers.Registry
    participant Provider as PostgreSQL Provider
    participant FSys as File System
    participant PG as postgres/initdb/pg_basebackup
    User->>Cmd: run deploy single/replication --provider postgresql <version>
    Cmd->>Reg: Get("postgresql")
    Reg-->>Cmd: provider instance
    Cmd->>Provider: ValidateVersion(version)
    Provider-->>Cmd: ok
    Cmd->>Provider: FindBinary(version)
    Provider->>FSys: stat ~/opt/postgresql/<version>/bin/postgres
    FSys-->>Provider: exists
    Cmd->>Provider: CreateSandbox(config)
    Provider->>FSys: initdb, write postgresql.conf/pg_hba.conf, write scripts
    Provider->>PG: run initdb (and pg_ctl start if requested)
    PG-->>Provider: initialized / running
    Cmd->>Provider: (for replication) CreateReplica(primary, config)
    Provider->>PG: pg_basebackup -> copy data -> modify configs -> start replica
    Provider-->>Cmd: SandboxInfo(dir,port,status)
    Cmd-->>User: print deployed sandbox dir and port
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Pull request overview
Implements Phase 3 of the provider architecture by adding a PostgreSQL provider (including deb-based binary unpacking, sandbox lifecycle, and streaming replication) and extending ProxySQL wiring so ProxySQL can be deployed against PostgreSQL backends. It also introduces a --provider flag to route deploy single/multiple/replication through the appropriate provider implementation and adds a standalone deploy postgresql command.
Changes:
- Extend the `providers.Provider` interface with topology capability discovery and replica creation hooks.
- Add a full PostgreSQL provider implementation (unpack from `.deb`, sandbox lifecycle scripts, replication via `pg_basebackup`, unit + integration tests).
- Update cmd + ProxySQL integration to support `--provider=postgresql` and generate `pgsql_*` ProxySQL config blocks.
Reviewed changes
Copilot reviewed 27 out of 28 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| sandbox/proxysql_topology.go | Pass backend provider into ProxySQL topology deploy config. |
| providers/proxysql/proxysql.go | Add BackendProvider wiring + psql-based use_proxy for PostgreSQL; implement new Provider interface methods. |
| providers/proxysql/config.go | Generate pgsql_servers/pgsql_users/pgsql_variables when backend is PostgreSQL. |
| providers/proxysql/config_test.go | Add coverage for MySQL vs PostgreSQL config generation. |
| providers/provider.go | Extend Provider interface; add ErrNotSupported, ContainsString, CompatibleAddons. |
| providers/provider_test.go | Update mock provider + tests for new interface helpers and addon compatibility. |
| providers/postgresql/unpack.go | Implement .deb extraction into ~/opt/postgresql/<version>/.... |
| providers/postgresql/unpack_test.go | Unit tests for deb parsing/classification + required binaries list. |
| providers/postgresql/config.go | Generate postgresql.conf and pg_hba.conf (with replication toggles). |
| providers/postgresql/config_test.go | Unit tests for config generation. |
| providers/postgresql/scripts.go | Generate lifecycle + replication-check scripts for PostgreSQL sandboxes. |
| providers/postgresql/sandbox.go | Implement PostgreSQL CreateSandbox via initdb + config/script writing. |
| providers/postgresql/replication.go | Implement CreateReplica via pg_basebackup + config rewrite + start. |
| providers/postgresql/postgresql.go | Core PostgreSQL provider (name/version validation/ports/find/start/stop/register). |
| providers/postgresql/postgresql_test.go | Unit tests for provider behavior and script generation. |
| providers/postgresql/integration_test.go | Build-tagged integration tests for single + replication workflows. |
| providers/mysql/mysql.go | Implement new Provider interface methods for MySQL provider. |
| globals/globals.go | Add globals for provider selection flag defaulting to MySQL. |
| cmd/root.go | Register PostgreSQL provider in the default registry. |
| cmd/single.go | Add --provider routing; integrate ProxySQL deployment with backend provider selection. |
| cmd/multiple.go | Add --provider routing for multiple topology (non-MySQL path). |
| cmd/replication.go | Add --provider routing + PostgreSQL replication path + monitoring scripts + ProxySQL addon integration. |
| cmd/unpack.go | Add --provider=postgresql unpack flow using deb extraction; loosen arg validation. |
| cmd/deploy_postgresql.go | Add standalone dbdeployer deploy postgresql <version> command. |
| cmd/export_test.go | Update export/import command enumeration expectations for new subcommand. |
| docs/superpowers/specs/2026-03-24-phase3-postgresql-provider-design.md | Design spec for PostgreSQL provider + cmd/proxysql integration. |
| docs/superpowers/plans/2026-03-24-phase3-postgresql-provider.md | Detailed implementation plan and task breakdown. |
| .gitignore | Ignore .worktrees/. |
```go
if common.DirExists(topologyDir) {
	common.Exitf(1, "sandbox directory %s already exists", topologyDir)
}
os.MkdirAll(topologyDir, 0755)
```
os.MkdirAll(topologyDir, 0755) is called without checking the returned error. If directory creation fails, subsequent sandbox creation will fail with less clear errors. Please handle the error and abort early.
Suggested change:

```go
if err := os.MkdirAll(topologyDir, 0755); err != nil {
	common.Exitf(1, "error creating sandbox directory %s: %s", topologyDir, err)
}
```
```diff
 Use:     "unpack MySQL-tarball",
-Args:    cobra.ExactArgs(1),
+Args:    cobra.MinimumNArgs(1),
 Aliases: []string{"extract", "untar", "unzip", "inflate", "expand"},
```
Changing Args to cobra.MinimumNArgs(1) allows extra args for the default MySQL unpack path, but unpackTarball only uses args[0] (silently ignoring the rest). This can hide user mistakes. Consider validating len(args)==1 when --provider=mysql (or implement a custom Args function that enforces different arg counts per provider).
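A provider-aware validator along these lines could enforce the stricter count. This is a minimal sketch with illustrative names (`validateUnpackArgs` and the provider strings are not taken from the PR):

```go
package main

import (
	"errors"
	"fmt"
)

// validateUnpackArgs enforces per-provider positional-argument counts:
// the MySQL unpack path only ever reads args[0], so extra arguments
// indicate a user mistake; the PostgreSQL path accepts one or more
// .deb files.
func validateUnpackArgs(provider string, args []string) error {
	switch provider {
	case "", "mysql":
		if len(args) != 1 {
			return fmt.Errorf("provider mysql expects exactly 1 tarball, got %d args", len(args))
		}
	case "postgresql":
		if len(args) < 1 {
			return errors.New("provider postgresql expects at least 1 .deb file")
		}
	default:
		return fmt.Errorf("unknown provider %q", provider)
	}
	return nil
}

func main() {
	fmt.Println(validateUnpackArgs("mysql", []string{"mysql.tar.gz", "extra"}))
	fmt.Println(validateUnpackArgs("postgresql", []string{"a.deb", "b.deb"}))
}
```

In cobra this could be wired as a custom `Args` function that reads the `--provider` flag from the command before validating.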
```go
skipStart, _ := flags.GetBool(globals.SkipStartLabel)
if !skipStart {
	if err := p.StartSandbox(primaryDir); err != nil {
		common.Exitf(1, "error starting primary: %s", err)
	}
}
```
In the non-MySQL replication path, --skip-start skips starting the primary (lines 90-95) but the code still proceeds to create replicas via CreateReplica(...), which requires a running primary for pg_basebackup. Either disallow --skip-start for non-MySQL replication or always start the primary before creating replicas.
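One way to enforce this up front is to reject the incompatible flag combination before any directories are created. A minimal sketch with a hypothetical helper name:

```go
package main

import (
	"errors"
	"fmt"
)

// checkReplicationFlags rejects a flag combination that cannot work for
// PostgreSQL replication: pg_basebackup streams from a live primary, so
// skipping the primary start while still creating replicas must fail fast.
func checkReplicationFlags(skipStart bool, replicaCount int) error {
	if skipStart && replicaCount > 0 {
		return errors.New("--skip-start is incompatible with replica creation: pg_basebackup requires a running primary")
	}
	return nil
}

func main() {
	fmt.Println(checkReplicationFlags(true, 2))
	fmt.Println(checkReplicationFlags(false, 2))
}
```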
```go
if common.DirExists(topologyDir) {
	common.Exitf(1, "sandbox directory %s already exists", topologyDir)
}
os.MkdirAll(topologyDir, 0755)
```
os.MkdirAll(topologyDir, 0755) is called without checking the returned error. If directory creation fails (permissions, disk full), later steps will fail with less clear errors. Please handle the error and abort early.
Suggested change:

```go
if err := os.MkdirAll(topologyDir, 0755); err != nil {
	common.Exitf(1, "error creating topology directory %s: %s", topologyDir, err)
}
```
```diff
 Use:     "unpack MySQL-tarball",
-Args:    cobra.ExactArgs(1),
+Args:    cobra.MinimumNArgs(1),
 Aliases: []string{"extract", "untar", "unzip", "inflate", "expand"},
 Short:   "unpack a tarball into the binary directory",
 Long: `If you want to create a sandbox from a tarball (.tar.gz or .tar.xz), you first need to unpack it
```
The unpack command now supports --provider=postgresql with multiple .deb args, but Use/Short (and the Long description below) still describe only “MySQL-tarball” unpacking. This makes dbdeployer unpack --help misleading for the PostgreSQL flow; please update the help text (and ideally Use:) to mention the provider flag and .deb usage.
```go
os.WriteFile(path.Join(topologyDir, "check_replication"), []byte(checkReplScript), 0755)

checkRecovScript := postgresql.GenerateCheckRecoveryScript(scriptOpts, replicaPorts)
os.WriteFile(path.Join(topologyDir, "check_recovery"), []byte(checkRecovScript), 0755)
```
Errors from os.WriteFile when generating check_replication / check_recovery are ignored. If these writes fail, the command will report success but the topology directory will be missing critical scripts. Please check the returned errors (and fail the command if needed).
Suggested change:

```go
if err := os.WriteFile(path.Join(topologyDir, "check_replication"), []byte(checkReplScript), 0755); err != nil {
	common.Exitf(1, "error writing check_replication script: %s", err)
}
checkRecovScript := postgresql.GenerateCheckRecoveryScript(scriptOpts, replicaPorts)
if err := os.WriteFile(path.Join(topologyDir, "check_recovery"), []byte(checkRecovScript), 0755); err != nil {
	common.Exitf(1, "error writing check_recovery script: %s", err)
}
```
```go
fmt.Printf(" Primary deployed in %s (port: %d)\n", primaryDir, primaryPort)

primaryInfo := providers.SandboxInfo{Dir: primaryDir, Port: primaryPort, Status: "running"}
```
primaryInfo is constructed with Status: "running" unconditionally, even if --skip-start was used and the primary wasn’t started. This can mislead provider implementations / future logic that relies on Status. Set Status based on whether the primary was actually started (or omit it if unused).
Suggested change:

```go
primaryStatus := ""
if !skipStart {
	primaryStatus = "running"
}
primaryInfo := providers.SandboxInfo{Dir: primaryDir, Port: primaryPort, Status: primaryStatus}
```
```go
"monitor_user":     "msandbox",
"monitor_password": "msandbox",
"backends":         strings.Join(backendParts, ","),
"backend_provider": backendProvider,
},
```
When backend_provider is "postgresql", monitor_user/monitor_password are still hardcoded to "msandbox" here. The PostgreSQL sandboxes created by this PR initialize only the "postgres" role, so ProxySQL will be configured with a user that doesn’t exist on the backend and use_proxy will try to connect as that user. Please branch on backendProvider and use a PostgreSQL-appropriate user (e.g. "postgres")/auth settings.
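A small helper branching on the backend provider would keep the credential choice in one place. This is a sketch: the PostgreSQL values assume the trust-authenticated `postgres` role these sandboxes initialize, and the function name is illustrative.

```go
package main

import "fmt"

// monitorCredentials picks sandbox monitor credentials per backend:
// "msandbox"/"msandbox" is the MySQL sandbox default, while the
// PostgreSQL sandboxes in this PR only initialize the "postgres" role
// (trust authentication, so no password is needed).
func monitorCredentials(backendProvider string) (user, password string) {
	if backendProvider == "postgresql" {
		return "postgres", ""
	}
	return "msandbox", "msandbox"
}

func main() {
	u, p := monitorCredentials("postgresql")
	fmt.Printf("pg: %q %q\n", u, p)
	u, p = monitorCredentials("mysql")
	fmt.Printf("mysql: %q %q\n", u, p)
}
```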
```go
libDir := filepath.Join(basedir, "lib")
dataDir := filepath.Join(config.Dir, "data")
logFile := filepath.Join(config.Dir, "postgresql.log")
```
CreateReplica builds dataDir := filepath.Join(config.Dir, "data") but doesn’t ensure config.Dir exists before invoking pg_basebackup -D dataDir. In the cmd layer, replicaDir isn’t created ahead of time, so pg_basebackup is likely to fail because the parent directory doesn’t exist. Create config.Dir (or filepath.Dir(dataDir)) with os.MkdirAll before running pg_basebackup.
Suggested change:

```go
// Ensure the sandbox directory exists before running pg_basebackup
if err := os.MkdirAll(config.Dir, 0755); err != nil {
	return nil, fmt.Errorf("creating sandbox directory %s: %w", config.Dir, err)
}
```
```go
scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\npsql -h %s -p %d -U %s \"$@\"\n",
	host, mysqlPort, monitorUser)
```
For backend_provider == "postgresql", the generated use_proxy script invokes psql without a non-interactive way to provide the password (psql will prompt). If ProxySQL is configured with a password in pgsql_users, this script can hang or fail in automation. Consider exporting PGPASSWORD (or documenting .pgpass usage) to match how credentials are handled in the MySQL use_proxy script.
Suggested change:

```go
scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\nexport PGPASSWORD=%q\npsql -h %s -p %d -U %s \"$@\"\n",
	monitorPass, host, mysqlPort, monitorUser)
```
Code Review
This pull request introduces a comprehensive PostgreSQL provider to dbdeployer, enabling the deployment of single, multiple, and replication PostgreSQL sandboxes. It includes functionality for unpacking PostgreSQL .deb packages and integrates PostgreSQL as a backend for ProxySQL. The provider interface was extended to support topology capabilities and replica creation. Key areas for improvement highlighted in the review include enhancing the error handling for common.FindFreePort across various commands to prevent deployment failures when a free port cannot be found. Additionally, it's suggested to make the DbUser configurable for PostgreSQL sandboxes instead of hardcoding 'postgres', and to refactor basedir resolution in cmd/replication.go to reuse existing PostgreSQL provider logic. Further, the hardcoded trust authentication in pg_hba.conf for sandboxes is noted as a potential security concern for future consideration, and the method of modifying postgresql.conf via string manipulation is identified as fragile, suggesting a more robust parsing approach.
```go
freePort, portErr := common.FindFreePort(port, []int{}, 1)
if portErr == nil {
	port = freePort
}
```
The current error handling for common.FindFreePort is problematic. If FindFreePort returns an error (meaning it couldn't find a free port starting from the suggested one), the original port value is retained. This port might already be in use, leading to a subsequent failure when CreateSandbox attempts to bind to it. It would be more robust to either exit with an error if no free port can be found or implement a retry mechanism to find an alternative port.
Suggested change:

```go
freePort, portErr := common.FindFreePort(port, []int{}, 1)
if portErr != nil {
	common.Exitf(1, "error finding a free port: %s", portErr)
}
port = freePort
```
```go
freePort, err := common.FindFreePort(port, []int{}, 1)
if err == nil {
	port = freePort
}
```
Similar to deploy_postgresql.go, the error handling for common.FindFreePort here is not robust. If FindFreePort fails to find a free port, the original port (which might be in use) is kept. This can lead to subsequent errors during sandbox creation. Consider exiting or implementing a more robust port allocation strategy if a free port cannot be found.
Suggested change:

```go
freePort, err := common.FindFreePort(port, []int{}, 1)
if err != nil {
	common.Exitf(1, "error finding a free port for node %d: %s", i, err)
}
port = freePort
```
```go
freePort, err := common.FindFreePort(replicaPort, []int{}, 1)
if err == nil {
	replicaPort = freePort
}
```
Similar to other cmd files, the error handling for common.FindFreePort is not robust. If FindFreePort returns an error, the original replicaPort (which might be in use) is retained. This could lead to deployment failures for replicas. It's recommended to exit or handle this error more explicitly.
Suggested change:

```go
freePort, err := common.FindFreePort(replicaPort, []int{}, 1)
if err != nil {
	common.Exitf(1, "error finding a free port for replica %d: %s", i, err)
}
replicaPort = freePort
```
```go
freePort, portErr := common.FindFreePort(port, []int{}, p.DefaultPorts().PortsPerInstance)
if portErr == nil {
	port = freePort
}
```
The error handling for common.FindFreePort is not robust. If FindFreePort returns an error, the original port (which might be in use) is kept. This can lead to sandbox creation failures. It's recommended to exit or handle this error more explicitly.
Suggested change:

```go
freePort, portErr := common.FindFreePort(port, []int{}, p.DefaultPorts().PortsPerInstance)
if portErr != nil {
	common.Exitf(1, "error finding a free port: %s", portErr)
}
port = freePort
```
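As an alternative to exiting immediately, port allocation can be made robust by actually binding to candidate ports and scanning forward. A self-contained sketch (this is not dbdeployer's `common.FindFreePort`; the helper name and range are illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// findFreePortWithRetry probes ports by binding to them, scanning up to
// maxTries candidates starting at start. It returns an error rather than
// silently falling back to a possibly-busy port.
func findFreePortWithRetry(start, maxTries int) (int, error) {
	for p := start; p < start+maxTries; p++ {
		l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", p))
		if err != nil {
			continue // port busy (or otherwise unusable), try the next one
		}
		l.Close()
		return p, nil
	}
	return 0, fmt.Errorf("no free port found in range %d-%d", start, start+maxTries-1)
}

func main() {
	port, err := findFreePortWithRetry(15432, 200)
	fmt.Println(port, err)
}
```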
Contexts flagged for the hardcoded `DbUser` (see the review summary above on making `DbUser` configurable):

```go
Dir:    nodeDir,
Port:   port,
Host:   "127.0.0.1",
DbUser: "postgres",
```

```go
Dir:    replicaDir,
Port:   replicaPort,
Host:   "127.0.0.1",
DbUser: "postgres",
```
```go
home, _ := os.UserHomeDir()
basedir := path.Join(home, "opt", "postgresql", version)
binDir := path.Join(basedir, "bin")
libDir := path.Join(basedir, "lib")
```
The logic to construct basedir, binDir, and libDir is duplicated here from the providers/postgresql/postgresql.go package's basedirFromVersion function. Reusing postgresql.basedirFromVersion(version) would improve maintainability and ensure consistency.
Suggested change:

```go
basedir, err := postgresql.BasedirFromVersion(version)
if err != nil {
	common.Exitf(1, "error resolving PostgreSQL basedir: %s", err)
}
binDir := filepath.Join(basedir, "bin")
libDir := filepath.Join(basedir, "lib")
```

Another context with the hardcoded `DbUser`:

```go
Dir:    sandboxDir,
Port:   port,
Host:   "127.0.0.1",
DbUser: "postgres",
```
```go
func GeneratePgHbaConf(replication bool) string {
	var b strings.Builder
	b.WriteString("# TYPE DATABASE USER ADDRESS METHOD\n")
	b.WriteString("local all all trust\n")
	b.WriteString("host all all 127.0.0.1/32 trust\n")
	b.WriteString("host all all ::1/128 trust\n")

	if replication {
		b.WriteString("host replication all 127.0.0.1/32 trust\n")
	}

	return b.String()
}
```
The GeneratePgHbaConf function hardcodes trust authentication for all connections. While this simplifies setup for a sandbox, it's generally not a recommended security practice for any environment that might resemble production. For future enhancements, consider adding options to configure more secure authentication methods (e.g., md5, scram-sha-256) or restricting access to specific users/networks.
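A parameterized variant along those lines — a sketch of the suggestion, not the PR's code — passes the auth method straight through, so callers can choose `trust`, `md5`, or `scram-sha-256`:

```go
package main

import (
	"fmt"
	"strings"
)

// generatePgHbaConf mirrors GeneratePgHbaConf but takes the auth method
// as a parameter instead of hardcoding "trust".
func generatePgHbaConf(replication bool, method string) string {
	var b strings.Builder
	b.WriteString("# TYPE DATABASE USER ADDRESS METHOD\n")
	fmt.Fprintf(&b, "local all all %s\n", method)
	fmt.Fprintf(&b, "host all all 127.0.0.1/32 %s\n", method)
	fmt.Fprintf(&b, "host all all ::1/128 %s\n", method)
	if replication {
		fmt.Fprintf(&b, "host replication all 127.0.0.1/32 %s\n", method)
	}
	return b.String()
}

func main() {
	fmt.Print(generatePgHbaConf(true, "scram-sha-256"))
}
```

Note that non-trust methods also require setting a password for the role at init time (e.g. `initdb --auth=scram-sha-256 --pwfile=...`).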
```go
conf := string(confBytes)
lines := strings.Split(conf, "\n")
var newLines []string
for _, line := range lines {
	trimmed := strings.TrimSpace(line)
	if strings.HasPrefix(trimmed, "port =") || strings.HasPrefix(trimmed, "port=") {
		newLines = append(newLines, fmt.Sprintf("port = %d", config.Port))
	} else if strings.HasPrefix(trimmed, "unix_socket_directories =") || strings.HasPrefix(trimmed, "unix_socket_directories=") {
		newLines = append(newLines, fmt.Sprintf("unix_socket_directories = '%s'", dataDir))
	} else {
		newLines = append(newLines, line)
	}
}
```
Modifying the postgresql.conf file by reading its content as a string, splitting it into lines, and then rejoining after string replacement can be fragile. This approach is susceptible to issues if the format of the configuration file changes, or if the target lines (port =, unix_socket_directories =) appear in comments or other unexpected contexts. A more robust solution might involve using a dedicated configuration parsing library or more precise regular expressions to ensure changes are applied correctly.
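A line-anchored regular expression is one middle ground: it skips commented-out keys and appends the setting when absent. A sketch with a hypothetical helper (not the PR's code):

```go
package main

import (
	"fmt"
	"regexp"
)

// setConfValue replaces "key = ..." lines in a postgresql.conf-style
// string. The (?m)^ anchor means commented-out occurrences ("#port = ...")
// are left alone; if the key is missing entirely, the setting is appended.
func setConfValue(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^[ \t]*` + regexp.QuoteMeta(key) + `[ \t]*=.*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + line + "\n"
}

func main() {
	conf := "#port = 5432\nport = 5432\nmax_connections = 100\n"
	fmt.Print(setConfValue(conf, "port", "6001"))
}
```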
Actionable comments posted: 14
🧹 Nitpick comments (8)
sandbox/proxysql_topology.go (1)
12-20: Update doc comments to reflect multi-backend support. The doc block still describes MySQL-only topology terminology (`master`/`slave`), but the function now also handles PostgreSQL backends via `backendProvider`.

providers/provider.go (1)
102-104: Consider hardening `CompatibleAddons` against external mutation. Because `CompatibleAddons` is exported and mutable, any package can alter compatibility rules at runtime. Prefer an unexported map with accessor(s) returning a copy.

♻️ Optional hardening sketch:

```diff
-var CompatibleAddons = map[string][]string{
+var compatibleAddons = map[string][]string{
 	"proxysql": {"mysql", "postgresql"},
 }
+
+func CompatibleProviders(addon string) []string {
+	v := compatibleAddons[addon]
+	out := make([]string, len(v))
+	copy(out, v)
+	return out
+}
```

providers/postgresql/config_test.go (1)
60-62: Tighten the non-replication negative assertion to avoid brittle failures. Checking for the generic substring `"replication"` can fail on unrelated text changes. Assert absence of the specific rule token instead (e.g., `"host replication"`).

♻️ Suggested refinement:

```diff
-	if strings.Contains(conf, "replication") {
-		t.Error("should not contain replication when replication is false")
+	if strings.Contains(conf, "host replication") {
+		t.Error("should not contain host replication rule when replication is false")
 	}
```

providers/provider_test.go (1)
106-118: This test never exercises the real MySQL provider. Line 108 registers `mockProvider{name: "mysql-like"}`, so `TestSupportedTopologiesMySQL` will still pass if `providers/mysql/mysql.go` advertises the wrong topology set. Either rename the test or instantiate the concrete MySQL provider here.

cmd/multiple.go (1)
75-103: Consider cleanup on partial deployment failure. If node creation fails midway (e.g., node 3 of 5 fails), previously created nodes remain running/deployed, which could leave the system in an inconsistent state. Consider tracking created nodes and cleaning up on failure, or at minimum documenting this behavior.

providers/postgresql/replication.go (1)
60: The gosec G306 warning is a false positive here. PostgreSQL configuration files conventionally use `0644` permissions, and this file contains no secrets (authentication is `trust`-based for local sandbox use). To silence the linter, add a `// #nosec G306` directive; otherwise the current permissions are appropriate.

providers/postgresql/integration_test.go (1)
16-30: Error handling in `findPostgresVersion`. Line 18 ignores the error from `os.UserHomeDir()`. While unlikely to fail, it's better to handle it consistently.

💡 Optional fix:

```diff
 func findPostgresVersion(t *testing.T) string {
 	t.Helper()
-	home, _ := os.UserHomeDir()
+	home, err := os.UserHomeDir()
+	if err != nil {
+		t.Skipf("cannot determine home directory: %v", err)
+	}
 	entries, err := os.ReadDir(filepath.Join(home, "opt", "postgresql"))
```

providers/postgresql/unpack.go (1)
84-92: Silent skip when source directories don't exist. If the deb package structure changes or is malformed, missing source directories are silently skipped. Consider logging a warning or returning an error for critical directories like `srcBin`.

💡 Optional improvement to warn on missing directories:

```diff
 for _, c := range copies {
 	if _, err := os.Stat(c.src); os.IsNotExist(err) {
+		// srcBin is required; others are optional
+		if c.src == srcBin {
+			return fmt.Errorf("required directory %s not found in deb package", c.src)
+		}
 		continue
 	}
```
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `cmd/multiple.go`:
- Line 71: `os.MkdirAll(topologyDir, 0755)` ignores its error. Capture the returned error and handle it appropriately — propagate it up the call stack, or log and exit — instead of discarding it.

In `cmd/replication.go`:
- Lines 33-35: `deployReplicationNonMySQL` dereferences `args[0]` immediately, which can panic when no version is provided. Validate the argument count (or call the shared validation helper) before using `args[0]`, matching the guard used in the other dispatch path.
- Line 71: `os.MkdirAll(topologyDir, 0755)` (and other filesystem operations around lines 146-149) ignore returned errors. Capture and handle them so the command fails fast on permission or disk errors instead of proceeding with a partial topology; do the same for subsequent file writes and script deployments to prevent silent partial success.
- Lines 68-71: the code creates `topologyDir` with `os.MkdirAll` but does not remove it on subsequent failures, causing future runs to abort at `common.DirExists`. After creating `topologyDir`, register a cleanup path that calls `os.RemoveAll(topologyDir)` and stops any partially created sandboxes/ProxySQL instances on error, and ensure all error-return paths in the primary startup, replica creation, and ProxySQL deployment code invoke this cleanup; apply the same pattern around the other mkdir/create blocks (lines 86-94,
120-126, 157-159) so common.DirExists no longer blocks retries.
- Around line 90-100: The code currently allows --skip-start to prevent
p.StartSandbox(primaryDir) which breaks replica provisioning (replicas require a
live primary); change the logic in the block that reads skipStart
(flags.GetBool) so that if this code path is provisioning replicas you reject
--skip-start with a clear error (use common.Exitf) rather than short-circuiting,
or alternatively always call p.StartSandbox(primaryDir) during provisioning and
only stop the topology later; update the check around skipStart and
p.StartSandbox, and ensure primaryInfo.Status reflects the actual state (use
"running" only when StartSandbox succeeded).
In `@cmd/root.go`:
- Line 154: The call to postgresqlprovider.Register(providers.DefaultRegistry)
currently ignores its error; change it to capture and handle the returned error
(e.g., err := postgresqlprovider.Register(...)) and fail fast on error similar
to the MySQL registration path by logging or exiting (or returning the error
from init) so duplicate or failed registrations surface immediately; update the
logic around postgresqlprovider.Register and providers.DefaultRegistry to check
err and propagate/terminate accordingly.
In `@cmd/unpack.go`:
- Around line 43-66: The PostgreSQL branch of the unpack command ignores parsed
flags (basedir, overwrite, dryRun) and always writes to
$HOME/opt/postgresql/<version>; update the branch that handles providerName ==
"postgresql" to derive targetDir from the parsed basedir flag (falling back to
os.UserHomeDir()/opt/postgresql/<version> when basedir is empty), and pass the
overwrite and dryRun flags through to the unpack logic (either by calling an
UnpackDebs variant that accepts options or by wiring flags into
postgresql.UnpackDebs invocation) so postgresql.UnpackDebs (or its caller)
respects overwrite and dryRun instead of hard-coding the path or ignoring flags;
keep existing error handling around postgresql.ClassifyDebs,
postgresql.ParseDebVersion, Version, and postgresql.UnpackDebs.
In `@docs/superpowers/specs/2026-03-24-phase3-postgresql-provider-design.md`:
- Around line 328-346: The providers/postgresql file layout in the summary is
incorrect: restore the earlier filenames and map responsibilities to their
actual implementations — list providers/postgresql/sandbox.go (implements
CreateSandbox and sandbox lifecycle), providers/postgresql/replication.go
(implements CreateReplica and replication lifecycle), and
providers/postgresql/scripts.go (helper scripts and lifecycle hooks), and keep
postgresql.go for shared types/constructors only; update the summary to
reference the CreateSandbox and CreateReplica symbols and the lifecycle
responsibilities on sandbox.go, replication.go, and scripts.go so readers are
directed to the correct files.
- Around line 91-96: The fenced code blocks in the spec (e.g., the block showing
the PostgreSQL directory tree with triple backticks) lack language identifiers
and trigger markdownlint MD040; update each backtick fence to include an
appropriate language tag (for example use "text" or "conf" for the directory
tree snippet, "go" for Go examples, or "markdown" where relevant) for the blocks
at the shown snippet and the other affected ranges (137-150, 195-204, 269-298,
328-346) so they get proper syntax highlighting and satisfy markdownlint.
In `@providers/postgresql/postgresql.go`:
- Around line 22-37: ValidateVersion currently accepts negative minor/major
parts (e.g., "16.-1") and VersionToPort parses independently, causing drift; add
a single shared parser function (e.g., parsePostgresVersion or
ensureVersionParts) that both ValidateVersion and VersionToPort call, which
splits the string, ensures exactly two parts, converts both parts to integers,
and rejects negative values (returning an error if any part < 0), then update
ValidateVersion to use that parser for validation and update VersionToPort to
call the same parser so parsing/validation logic is centralized and consistent.
- Around line 66-76: Update FindBinary to validate all PostgreSQL tool binaries
required by this provider (not just "postgres") before returning success: check
for "initdb", "pg_ctl", "psql", "pg_basebackup" (in addition to "postgres") in
the same binPath derived in FindBinary, and return a clear error listing any
missing tools so CreateSandbox and CreateReplica won’t later fail; reference the
existing FindBinary function and the callers CreateSandbox/CreateReplica when
making the change so the validations cover the binaries those methods depend on.
In `@providers/postgresql/replication.go`:
- Around line 65-80: The generated "clear" script (created by GenerateScripts)
will wipe replication state on replicas; update the script generation and
writing logic so replicas don't get a destructive clear: modify GenerateScripts
(providers/postgresql/scripts.go) to accept a flag (e.g., IsReplica or Role) or
add a Replica-safe option, and then in replication.go when calling
GenerateScripts pass that flag or filter the returned scripts map to either (a)
remove the "clear" entry for replicas, or (b) replace "clear" with a
replica-safe variant that preserves standby.signal and postgresql.auto.conf (or
contains a prominent warning). Ensure the change references GenerateScripts and
the scripts map used in the for loop so the "clear" file is not written to
replica sandboxes.
In `@providers/proxysql/config.go`:
- Around line 40-54: The pgsql_variables block currently writes
cfg.MonitorUser/cfg.MonitorPass (same as MySQL) causing ProxySQL monitor auth to
fail for Postgres; modify the logic in providers/proxysql/config.go (the isPgsql
branch that builds "pgsql_variables") to use a Postgres-specific monitor
credential (e.g., cfg.PGMonitorUser/cfg.PGMonitorPass or fallback to "postgres"
and its password) instead of cfg.MonitorUser/cfg.MonitorPass so the monitor role
actually exists, and update any references that write
monitor_username/monitor_password for pgsql_variables accordingly (leave
mysql_variables unchanged).
In `@providers/proxysql/proxysql.go`:
- Around line 115-120: The PostgreSQL `use_proxy` wrapper currently omits
`monitorPass` causing psql to prompt interactively; update the
scripts["use_proxy"] generation to pass the monitor password non-interactively
(e.g., export or prefix PGPASSWORD with monitorPass) and invoke psql with the
same args (host, mysqlPort, monitorUser, "$@") so it mirrors the MySQL helper
behavior; locate the block that builds scripts["use_proxy"] and include
monitorPass (variable monitorPass) in the wrapper in a safe-quoted way to avoid
prompting.
---
Nitpick comments:
In `@cmd/multiple.go`:
- Around line 75-103: When creating multiple nodes in the loop using
CreateSandbox and StartSandbox, partial failures leave previously created nodes
running; modify the logic in the for loop that iterates i (1..nodes) to track
created nodeDirs (or node identifiers) and, on any error from p.CreateSandbox or
p.StartSandbox, iterate that tracked list to stop/remove those sandboxes (using
existing Stop/Remove sandbox functions or p.StopSandbox/p.RemoveSandbox
equivalents) before exiting with common.Exitf; ensure the cleanup runs only for
entries added to the tracked slice and preserve the original error in the Exitf
call.
In `@providers/postgresql/config_test.go`:
- Around line 60-62: The negative assertion currently checks for the generic
substring "replication" on the test variable conf which is brittle; update the
check to assert absence of the specific rule token (e.g., "host replication"
or a regex like `^host\s+replication`) instead of the generic word, replacing
the strings.Contains(conf, "replication") condition used before calling t.Error
so the test fails only when the actual replication host rule is present.
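A tightened assertion along the suggested lines, using a rule-specific pattern (the `conf` contents here are invented test data):

```go
package main

import (
	"fmt"
	"regexp"
)

// replicationRule matches an actual pg_hba.conf replication entry, so a
// comment merely containing the word "replication" no longer trips the
// negative assertion.
var replicationRule = regexp.MustCompile(`(?m)^host\s+replication\b`)

func main() {
	conf := "host all all 127.0.0.1/32 trust\n# replication is disabled here\n"
	fmt.Println(replicationRule.MatchString(conf)) // false
	withRule := "host    replication    repl    127.0.0.1/32    trust\n"
	fmt.Println(replicationRule.MatchString(withRule)) // true
}
```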
In `@providers/postgresql/integration_test.go`:
- Around line 16-30: The function findPostgresVersion ignores the error from
os.UserHomeDir(); update it to check the returned error (from os.UserHomeDir())
and handle it consistently (e.g., call t.Skipf("could not determine user home:
%v", err) or similar) before using home, so you don't proceed with an empty home
path; modify the existing logic around the home, err := os.UserHomeDir() call
and keep the rest of the function (entries, os.ReadDir, t.Skipf on ReadDir error
and t.Skip if no dirs) unchanged.
In `@providers/postgresql/replication.go`:
- Line 60: The Gosec G306 false positive refers to the os.WriteFile call using
permission 0644 in replication.go; to silence the linter add a // `#nosec` G306
directive on the same line (or immediately above) the os.WriteFile(confPath,
[]byte(strings.Join(newLines, "\n")), 0644) invocation so the scanner ignores
this intentional non-secret file permission while leaving the 0644 mode
unchanged.
In `@providers/postgresql/unpack.go`:
- Around line 84-92: The loop over copies silently skips missing source
directories (for _, c := range copies { if _, err := os.Stat(c.src);
os.IsNotExist(err) { continue } ... }) which hides malformed packages; update
this check to log a warning for non-critical missing sources and return an error
for critical ones (e.g., when c.src corresponds to srcBin) instead of continuing
silently—use the existing logger or fmt.Errorf to surface the problem so callers
can fail fast on missing essential directories and only skip with a logged
warning for optional paths.
In `@providers/provider_test.go`:
- Around line 106-118: The test TestSupportedTopologiesMySQL registers a
mockProvider (mockProvider{name: "mysql-like"}) so it never exercises the real
MySQL implementation; replace the mock registration with the concrete MySQL
provider from the providers/mysql package (e.g., instantiate the provider via
its constructor such as mysql.NewProvider() or &mysql.Provider{} and Register
that), or alternatively rename the test to reflect it uses a mock; update the
Register/Get calls in TestSupportedTopologiesMySQL accordingly so
SupportedTopologies() is invoked on the real MySQL provider defined in
providers/mysql/mysql.go.
In `@providers/provider.go`:
- Around line 102-104: The exported mutable map CompatibleAddons allows external
packages to mutate compatibility rules; make the map unexported (rename to
compatibleAddons) and add accessor(s) that return safe copies (e.g.,
GetCompatibleAddons() returning a deep-copied map[string][]string and/or
GetCompatibleFor(addon string) returning a copied []string) so callers cannot
modify the underlying data; update any references to use these accessor
functions to preserve behavior.
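A sketch of the unexported-map-plus-copying-accessor pattern suggested above (the map contents are illustrative):

```go
package main

import "fmt"

// compatibleAddons is unexported; callers go through the accessor below.
var compatibleAddons = map[string][]string{
	"proxysql": {"mysql", "postgresql"},
}

// GetCompatibleAddons returns a deep copy, so mutating the result
// cannot change the provider compatibility rules.
func GetCompatibleAddons() map[string][]string {
	out := make(map[string][]string, len(compatibleAddons))
	for k, v := range compatibleAddons {
		out[k] = append([]string(nil), v...) // copy the slice too
	}
	return out
}

func main() {
	c := GetCompatibleAddons()
	c["proxysql"][0] = "mutated"
	fmt.Println(compatibleAddons["proxysql"][0]) // still "mysql"
}
```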
In `@sandbox/proxysql_topology.go`:
- Around line 12-20: Update the function doc comment for
DeployProxySQLForTopology to remove MySQL-only terminology and reflect
multi-backend support: replace "master/slave" with neutral terms like
"primary/replicas" (or "primary/replica ports"), mention that backendProvider
selects the database backend (e.g., "mysql", "postgresql", or "" for mysql
default), and update parameter descriptions (slavePorts -> replicaPorts or
"replica ports (empty for single topology)") and proxysql/backend binding
details so the comment accurately documents PostgreSQL support and the
backendProvider parameter.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: be619bc9-708d-45f5-93fc-8b13eadce325
📒 Files selected for processing (28)
.gitignore
cmd/deploy_postgresql.go
cmd/export_test.go
cmd/multiple.go
cmd/replication.go
cmd/root.go
cmd/single.go
cmd/unpack.go
docs/superpowers/plans/2026-03-24-phase3-postgresql-provider.md
docs/superpowers/specs/2026-03-24-phase3-postgresql-provider-design.md
globals/globals.go
providers/mysql/mysql.go
providers/postgresql/config.go
providers/postgresql/config_test.go
providers/postgresql/integration_test.go
providers/postgresql/postgresql.go
providers/postgresql/postgresql_test.go
providers/postgresql/replication.go
providers/postgresql/sandbox.go
providers/postgresql/scripts.go
providers/postgresql/unpack.go
providers/postgresql/unpack_test.go
providers/provider.go
providers/provider_test.go
providers/proxysql/config.go
providers/proxysql/config_test.go
providers/proxysql/proxysql.go
sandbox/proxysql_topology.go
if common.DirExists(topologyDir) {
	common.Exitf(1, "sandbox directory %s already exists", topologyDir)
}
os.MkdirAll(topologyDir, 0755)
Unchecked error from os.MkdirAll causes CI failure.
The pipeline failure confirms this issue. The error return value must be checked.
🐛 Proposed fix
- os.MkdirAll(topologyDir, 0755)
+ if err := os.MkdirAll(topologyDir, 0755); err != nil {
+ common.Exitf(1, "error creating topology directory %s: %s", topologyDir, err)
+ }
🧰 Tools
🪛 GitHub Actions: CI
[error] 71-71: golangci-lint: Error return value of os.MkdirAll is not checked (errcheck)
🪛 GitHub Check: Lint
[failure] 71-71:
Error return value of os.MkdirAll is not checked (errcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/multiple.go` at line 71, The call to os.MkdirAll(topologyDir, 0755)
currently ignores its error; update the code around that call to capture the
returned error (err := os.MkdirAll(...)) and handle it appropriately — for
example, return the error up the call stack or log it and exit (using the
surrounding command's logger or fmt.Errorf/wrapping). Locate the os.MkdirAll
invocation in cmd/multiple.go and ensure the function (where the call appears)
propagates or handles the error instead of discarding it.
func deployReplicationNonMySQL(cmd *cobra.Command, args []string, providerName string) {
	flags := cmd.Flags()
	version := args[0]
Validate args before dispatching to the non-MySQL path.
Lines 171-173 return before the existing common.CheckOrigin(args) on Line 178, but deployReplicationNonMySQL immediately dereferences args[0] on Line 35. dbdeployer deploy replication --provider=postgresql without a version will panic instead of surfacing a normal CLI error.
🔧 Suggested fix
  func replicationSandbox(cmd *cobra.Command, args []string) {
  	flags := cmd.Flags()
+ 	common.CheckOrigin(args)
  	providerName, _ := flags.GetString(globals.ProviderLabel)
  	if providerName != "mysql" {
  		deployReplicationNonMySQL(cmd, args, providerName)
  		return
  	}
-
- 	common.CheckOrigin(args)

Also applies to: 167-173
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/replication.go` around lines 33 - 35, The function
deployReplicationNonMySQL dereferences args[0] immediately which can panic when
no version is provided; before accessing args[0] perform the same argument
validation used elsewhere (call or mirror common.CheckOrigin(args) or explicit
args length check) and return a proper CLI error when args is empty; update
deployReplicationNonMySQL to validate args length (or call the shared validation
helper) before using args[0], matching the guard used in the other dispatch path
to prevent panics.
if common.DirExists(topologyDir) {
	common.Exitf(1, "sandbox directory %s already exists", topologyDir)
}
os.MkdirAll(topologyDir, 0755)
Roll back the topology on any failure after the directory is created.
Once Line 71 succeeds, failures in primary startup, replica creation, or ProxySQL deployment leave topologyDir behind. The next attempt then fails on Line 68's existing-dir guard, and the replica-failure path only stops sandboxes without removing them.
Also applies to: 86-94, 120-126, 157-159
🧰 Tools
🪛 GitHub Check: Lint
[failure] 71-71:
Error return value of os.MkdirAll is not checked (errcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/replication.go` around lines 68 - 71, The code creates topologyDir with
os.MkdirAll but does not remove it on subsequent failures, causing future runs
to abort at common.DirExists; fix by adding an automatic rollback that removes
the topologyDir and stops any partially created sandboxes whenever an error
occurs after os.MkdirAll. Specifically, after creating topologyDir (the
os.MkdirAll call) register a deferred cleanup path (or explicit error-handling
path) that calls os.RemoveAll(topologyDir) and stops/cleans any started
sandboxes/ProxySQL instances on error, and ensure all error-return paths in the
primary startup, replica creation, and ProxySQL deployment code invoke this
cleanup; apply the same pattern around the other mkdir/create blocks referenced
(lines 86-94, 120-126, 157-159) so common.DirExists no longer blocks retries.
if common.DirExists(topologyDir) {
	common.Exitf(1, "sandbox directory %s already exists", topologyDir)
}
os.MkdirAll(topologyDir, 0755)
Check filesystem writes before reporting success.
Lines 71, 146, and 149 ignore filesystem errors. A permission or disk failure here can leave the command continuing with a half-created topology or a deployed topology missing its monitoring scripts.
Also applies to: 146-149
🧰 Tools
🪛 GitHub Check: Lint
[failure] 71-71:
Error return value of os.MkdirAll is not checked (errcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/replication.go` at line 71, The call to os.MkdirAll(topologyDir, 0755)
(and other filesystem ops around lines 146–149) currently ignores returned
errors; update these to capture and handle errors (e.g., err :=
os.MkdirAll(...); if err != nil { return fmt.Errorf("creating topology dir %s:
%w", topologyDir, err) } or log + exit) so the command fails fast on
permission/disk errors instead of proceeding with a partial topology; do the
same for any subsequent file writes or script deployments referenced in this
file (check the functions/blocks that write monitoring scripts and any
os.WriteFile/os.Open/Copy calls) and propagate or log detailed errors to prevent
silent partial success.
skipStart, _ := flags.GetBool(globals.SkipStartLabel)
if !skipStart {
	if err := p.StartSandbox(primaryDir); err != nil {
		common.Exitf(1, "error starting primary: %s", err)
	}
}

fmt.Printf(" Primary deployed in %s (port: %d)\n", primaryDir, primaryPort)

primaryInfo := providers.SandboxInfo{Dir: primaryDir, Port: primaryPort, Status: "running"}
Don't let --skip-start short-circuit replica provisioning.
PostgreSQL replicas are cloned from a live primary, so Line 120 cannot work if Lines 91-95 intentionally keep the primary stopped. Line 99 also still marks the primary as running. Either reject --skip-start on this path or provision first and stop the topology at the end.
🔧 Minimal safe fix
skipStart, _ := flags.GetBool(globals.SkipStartLabel)
- if !skipStart {
- if err := p.StartSandbox(primaryDir); err != nil {
- common.Exitf(1, "error starting primary: %s", err)
- }
+ if skipStart {
+ common.Exit(1, "--skip-start is not supported with --provider=postgresql replication")
+ }
+ if err := p.StartSandbox(primaryDir); err != nil {
+ common.Exitf(1, "error starting primary: %s", err)
	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/replication.go` around lines 90 - 100, The code currently allows
--skip-start to prevent p.StartSandbox(primaryDir) which breaks replica
provisioning (replicas require a live primary); change the logic in the block
that reads skipStart (flags.GetBool) so that if this code path is provisioning
replicas you reject --skip-start with a clear error (use common.Exitf) rather
than short-circuiting, or alternatively always call p.StartSandbox(primaryDir)
during provisioning and only stop the topology later; update the check around
skipStart and p.StartSandbox, and ensure primaryInfo.Status reflects the actual
state (use "running" only when StartSandbox succeeded).
func (p *PostgreSQLProvider) ValidateVersion(version string) error {
	parts := strings.Split(version, ".")
	if len(parts) != 2 {
		return fmt.Errorf("invalid PostgreSQL version format: %q (expected major.minor, e.g. 16.13)", version)
	}
	major, err := strconv.Atoi(parts[0])
	if err != nil {
		return fmt.Errorf("invalid PostgreSQL major version %q: %w", parts[0], err)
	}
	if major < 12 {
		return fmt.Errorf("PostgreSQL major version must be >= 12, got %d", major)
	}
	if _, err := strconv.Atoi(parts[1]); err != nil {
		return fmt.Errorf("invalid PostgreSQL minor version %q: %w", parts[1], err)
	}
	return nil
}
Reject negative version parts and share one parser.
16.-1 is not a real PostgreSQL version, but the current validation accepts it and VersionToPort() still derives a port from it. Tighten both components to non-negative integers and funnel ValidateVersion()/VersionToPort() through the same parser so they cannot drift.
Also applies to: 50-63
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@providers/postgresql/postgresql.go` around lines 22 - 37, ValidateVersion
currently accepts negative minor/major parts (e.g., "16.-1") and VersionToPort
parses independently, causing drift; add a single shared parser function (e.g.,
parsePostgresVersion or ensureVersionParts) that both ValidateVersion and
VersionToPort call, which splits the string, ensures exactly two parts, converts
both parts to integers, and rejects negative values (returning an error if any
part < 0), then update ValidateVersion to use that parser for validation and
update VersionToPort to call the same parser so parsing/validation logic is
centralized and consistent.
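A sketch of the shared parser both methods could call; `parsePostgresVersion` is the suggested new helper, not existing code (ValidateVersion would still add its own major >= 12 check on top):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePostgresVersion enforces exactly two non-negative numeric parts,
// so inputs like "16.-1" are rejected in one place for both
// ValidateVersion and VersionToPort.
func parsePostgresVersion(version string) (major, minor int, err error) {
	parts := strings.Split(version, ".")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("invalid PostgreSQL version format: %q (expected major.minor)", version)
	}
	major, err = strconv.Atoi(parts[0])
	if err != nil || major < 0 {
		return 0, 0, fmt.Errorf("invalid PostgreSQL major version: %q", parts[0])
	}
	minor, err = strconv.Atoi(parts[1])
	if err != nil || minor < 0 {
		return 0, 0, fmt.Errorf("invalid PostgreSQL minor version: %q", parts[1])
	}
	return major, minor, nil
}

func main() {
	_, _, err := parsePostgresVersion("16.-1")
	fmt.Println(err != nil) // true: negative minor rejected
}
```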
func (p *PostgreSQLProvider) FindBinary(version string) (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", fmt.Errorf("cannot determine home directory: %w", err)
	}
	binPath := filepath.Join(home, "opt", "postgresql", version, "bin", "postgres")
	if _, err := os.Stat(binPath); err != nil {
		return "", fmt.Errorf("PostgreSQL binary not found at %s: %w", binPath, err)
	}
	return binPath, nil
}
FindBinary should validate the tools this provider actually needs.
CreateSandbox() and CreateReplica() also depend on initdb, pg_ctl, psql, and pg_basebackup. A partial unpack can pass Line 72 today and only fail later after deployment has already started.
🔧 Suggested fix
func (p *PostgreSQLProvider) FindBinary(version string) (string, error) {
home, err := os.UserHomeDir()
if err != nil {
return "", fmt.Errorf("cannot determine home directory: %w", err)
}
- binPath := filepath.Join(home, "opt", "postgresql", version, "bin", "postgres")
- if _, err := os.Stat(binPath); err != nil {
- return "", fmt.Errorf("PostgreSQL binary not found at %s: %w", binPath, err)
+ binDir := filepath.Join(home, "opt", "postgresql", version, "bin")
+ required := []string{"postgres", "initdb", "pg_ctl", "psql", "pg_basebackup"}
+ for _, name := range required {
+ path := filepath.Join(binDir, name)
+ if _, err := os.Stat(path); err != nil {
+ return "", fmt.Errorf("required PostgreSQL binary %q not found at %s: %w", name, path, err)
+ }
}
- return binPath, nil
+ return filepath.Join(binDir, "postgres"), nil
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@providers/postgresql/postgresql.go` around lines 66 - 76, Update FindBinary
to validate all PostgreSQL tool binaries required by this provider (not just
"postgres") before returning success: check for "initdb", "pg_ctl", "psql",
"pg_basebackup" (in addition to "postgres") in the same binPath derived in
FindBinary, and return a clear error listing any missing tools so CreateSandbox
and CreateReplica won’t later fail; reference the existing FindBinary function
and the callers CreateSandbox/CreateReplica when making the change so the
validations cover the binaries those methods depend on.
// Write lifecycle scripts
scripts := GenerateScripts(ScriptOptions{
	SandboxDir: config.Dir,
	DataDir:    dataDir,
	BinDir:     binDir,
	LibDir:     libDir,
	Port:       config.Port,
	LogFile:    logFile,
})
for name, content := range scripts {
	scriptPath := filepath.Join(config.Dir, name)
	if err := os.WriteFile(scriptPath, []byte(content), 0755); err != nil {
		os.RemoveAll(config.Dir)
		return nil, fmt.Errorf("writing script %s: %w", name, err)
	}
}
The clear script will destroy replication state on replicas.
The GenerateScripts function (see providers/postgresql/scripts.go:41-42) creates a clear script that runs rm -rf <dataDir> followed by initdb. When this script is written to a replica, running it will:
- Delete the standby.signal file created by pg_basebackup -R
- Remove the postgresql.auto.conf with replication connection settings
- Re-initialize as a standalone instance
Consider either:
- Generating a replica-specific clear script that preserves replication setup
- Omitting the clear script for replicas
- Adding a warning message to the replica's clear script
🧰 Tools
🪛 GitHub Check: Lint
[failure] 76-76:
G306: Expect WriteFile permissions to be 0600 or less (gosec)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@providers/postgresql/replication.go` around lines 65 - 80, The generated
"clear" script (created by GenerateScripts) will wipe replication state on
replicas; update the script generation and writing logic so replicas don't get a
destructive clear: modify GenerateScripts (providers/postgresql/scripts.go) to
accept a flag (e.g., IsReplica or Role) or add a Replica-safe option, and then
in replication.go when calling GenerateScripts pass that flag or filter the
returned scripts map to either (a) remove the "clear" entry for replicas, or (b)
replace "clear" with a replica-safe variant that preserves standby.signal and
postgresql.auto.conf (or contains a prominent warning). Ensure the change
references GenerateScripts and the scripts map used in the for loop so the
"clear" file is not written to replica sandboxes.
if isPgsql {
	b.WriteString("pgsql_variables=\n{\n")
	b.WriteString(fmt.Sprintf(" interfaces=\"%s:%d\"\n", cfg.AdminHost, cfg.MySQLPort))
	b.WriteString(fmt.Sprintf(" monitor_username=\"%s\"\n", cfg.MonitorUser))
	b.WriteString(fmt.Sprintf(" monitor_password=\"%s\"\n", cfg.MonitorPass))
	b.WriteString("}\n\n")
} else {
	b.WriteString("mysql_variables=\n{\n")
	b.WriteString(fmt.Sprintf(" interfaces=\"%s:%d\"\n", cfg.AdminHost, cfg.MySQLPort))
	b.WriteString(fmt.Sprintf(" monitor_username=\"%s\"\n", cfg.MonitorUser))
	b.WriteString(fmt.Sprintf(" monitor_password=\"%s\"\n", cfg.MonitorPass))
	b.WriteString(" monitor_connect_interval=2000\n")
	b.WriteString(" monitor_ping_interval=2000\n")
	b.WriteString("}\n\n")
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== ProxySQL defaults feeding pgsql_variables/pgsql_users =="
rg -n -C2 --type=go --type=sh 'monitor_user|monitor_password|backend_provider|msandbox'
echo
echo "== PostgreSQL bootstrap / role creation paths =="
rg -n -C2 --type=go --type=sh 'CREATE ROLE|CREATE USER|createuser|ALTER ROLE|psql.*-c|msandbox|--username=postgres'
Repository: ProxySQL/dbdeployer
Length of output: 19587
ProxySQL monitoring will fail for PostgreSQL backends due to missing user provisioning.
The PostgreSQL bootstrap (providers/postgresql/sandbox.go) only initializes the postgres superuser via initdb, but the ProxySQL configuration hardcodes monitor_user="msandbox" in the topology and passes it through to both MySQL and PostgreSQL config blocks. PostgreSQL never creates an msandbox role, so ProxySQL's monitor credentials will be invalid.
Either:
- Create an msandbox role in the PostgreSQL bootstrap before ProxySQL attempts to connect, or
- Provide a different monitor user (e.g., postgres) specific to the PostgreSQL backend provider.
This applies to lines 40–54 and 84–90 where cfg.MonitorUser/cfg.MonitorPass are written to pgsql_variables.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@providers/proxysql/config.go` around lines 40 - 54, The pgsql_variables block
currently writes cfg.MonitorUser/cfg.MonitorPass (same as MySQL) causing
ProxySQL monitor auth to fail for Postgres; modify the logic in
providers/proxysql/config.go (the isPgsql branch that builds "pgsql_variables")
to use a Postgres-specific monitor credential (e.g.,
cfg.PGMonitorUser/cfg.PGMonitorPass or fallback to "postgres" and its password)
instead of cfg.MonitorUser/cfg.MonitorPass so the monitor role actually exists,
and update any references that write monitor_username/monitor_password for
pgsql_variables accordingly (leave mysql_variables unchanged).
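The first option above (provisioning the role during bootstrap) could look like the sketch below. The helper names `monitorRoleSQL`/`createMonitorRole`, the `pg_monitor` grant, and the argument layout are illustrative assumptions, not code from this PR.

```go
package main

import (
	"fmt"
	"os/exec"
)

// monitorRoleSQL builds the bootstrap statement for a ProxySQL monitor role.
// pg_monitor (PostgreSQL 10+) grants read access to the statistics views the
// monitor queries. Plain string interpolation is tolerable here only because
// the sandbox controls both inputs.
func monitorRoleSQL(user, password string) string {
	return fmt.Sprintf(
		"CREATE ROLE %s WITH LOGIN PASSWORD '%s'; GRANT pg_monitor TO %s;",
		user, password, user)
}

// createMonitorRole runs the statement against a freshly started sandbox so
// that ProxySQL's monitor credentials are valid before it first connects.
func createMonitorRole(psqlPath string, port int, user, password string) error {
	cmd := exec.Command(psqlPath,
		"-h", "127.0.0.1",
		"-p", fmt.Sprintf("%d", port),
		"-U", "postgres",
		"-c", monitorRoleSQL(user, password))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("creating monitor role: %s (%s)", err, out)
	}
	return nil
}

func main() {
	fmt.Println(monitorRoleSQL("msandbox", "msandbox"))
}
```

A bootstrap hook like this would run right after `pg_ctl start` succeeds for the primary, before ProxySQL is deployed.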
| if config.Options["backend_provider"] == "postgresql" { | ||
| scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\npsql -h %s -p %d -U %s \"$@\"\n", | ||
| host, mysqlPort, monitorUser) | ||
| } else { | ||
| scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\nmysql -h %s -P %d -u %s -p%s --prompt 'ProxySQL> ' \"$@\"\n", | ||
| host, mysqlPort, monitorUser, monitorPass) |
Keep `use_proxy` non-interactive for PostgreSQL.

The new `psql` wrapper drops `monitorPass`, so the generated helper now prompts on every invocation instead of behaving like the MySQL variant. That breaks scripted use of the sandbox helper.
💡 Proposed fix

```diff
 if config.Options["backend_provider"] == "postgresql" {
-	scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\npsql -h %s -p %d -U %s \"$@\"\n",
-		host, mysqlPort, monitorUser)
+	scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\nPGPASSWORD=%q psql -h %s -p %d -U %s \"$@\"\n",
+		monitorPass, host, mysqlPort, monitorUser)
 } else {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if config.Options["backend_provider"] == "postgresql" { | |
| scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\npsql -h %s -p %d -U %s \"$@\"\n", | |
| host, mysqlPort, monitorUser) | |
| } else { | |
| scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\nmysql -h %s -P %d -u %s -p%s --prompt 'ProxySQL> ' \"$@\"\n", | |
| host, mysqlPort, monitorUser, monitorPass) | |
| if config.Options["backend_provider"] == "postgresql" { | |
| scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\nPGPASSWORD=%q psql -h %s -p %d -U %s \"$@\"\n", | |
| monitorPass, host, mysqlPort, monitorUser) | |
| } else { | |
| scripts["use_proxy"] = fmt.Sprintf("#!/bin/bash\nmysql -h %s -P %d -u %s -p%s --prompt 'ProxySQL> ' \"$@\"\n", | |
| host, mysqlPort, monitorUser, monitorPass) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@providers/proxysql/proxysql.go` around lines 115 - 120, The PostgreSQL
`use_proxy` wrapper currently omits `monitorPass` causing psql to prompt
interactively; update the scripts["use_proxy"] generation to pass the monitor
password non-interactively (e.g., export or prefix PGPASSWORD with monitorPass)
and invoke psql with the same args (host, mysqlPort, monitorUser, "$@") so it
mirrors the MySQL helper behavior; locate the block that builds
scripts["use_proxy"] and include monitorPass (variable monitorPass) in the
wrapper in a safe-quoted way to avoid prompting.
9828911 to 0a55662 (Compare)
Actionable comments posted: 5
♻️ Duplicate comments (9)
cmd/replication.go (3)
167-173: ⚠️ Potential issue | 🔴 Critical

Validate `args` before the provider-specific fast path.

The non-MySQL branch returns before the existing `common.CheckOrigin(args)` call, but `deployReplicationNonMySQL` reads `args[0]` immediately. `dbdeployer deploy replication --provider=postgresql` without a version will panic instead of returning a normal CLI error.

Minimal fix

```diff
 func replicationSandbox(cmd *cobra.Command, args []string) {
 	flags := cmd.Flags()
+	common.CheckOrigin(args)
 	providerName, _ := flags.GetString(globals.ProviderLabel)
 	if providerName != "mysql" {
 		deployReplicationNonMySQL(cmd, args, providerName)
 		return
 	}
-
-	common.CheckOrigin(args)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/replication.go` around lines 167 - 173, The non-MySQL fast path in replicationSandbox returns before validating args, causing deployReplicationNonMySQL to index args[0] and panic; move or invoke the existing validation (common.CheckOrigin(args)) before the provider branch (or add an args length check at the start of deployReplicationNonMySQL) so the CLI returns a proper error when no version is provided; update replicationSandbox to call common.CheckOrigin(args) before checking providerName (and propagate/handle the error) or add an explicit guard in deployReplicationNonMySQL that validates args length and returns a CLI-friendly error instead of indexing args[0].
67-71: ⚠️ Potential issue | 🟠 Major

Add one rollback path for partial topologies and handle write errors.

After `topologyDir` is created, failures in primary startup, replica creation, monitoring-script writes, or ProxySQL deployment leave a half-built topology behind, and the unchecked `MkdirAll`/`WriteFile` calls can hide permission or disk failures. Clean up `topologyDir` on every error and propagate the filesystem errors eagerly.

Also applies to: 86-94, 120-126, 145-149, 157-159
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/replication.go` around lines 67 - 71, The code currently calls os.MkdirAll(topologyDir, 0755) and performs WriteFile calls without checking or propagating errors, leaving partial topologies on failure; update all spots referencing topologyDir, DirExists, MkdirAll, and WriteFile (including the blocks around primary startup, replica creation, monitoring-script writes, and ProxySQL deployment) to: check and return any error from os.MkdirAll immediately, check and return errors from all ioutil.WriteFile/File writes, and on any subsequent failure clean up the partially-created topology by calling os.RemoveAll(topologyDir) before returning the error so partial directories/files are removed and filesystem errors are propagated to callers.
90-99: ⚠️ Potential issue | 🔴 Critical

Don't allow `--skip-start` while replicas are cloned from the primary.

`CreateReplica` depends on `pg_basebackup` from the primary, so skipping the primary start makes replica creation fail while `primaryInfo` is still marked `"running"`.

Minimal safe fix

```diff
 skipStart, _ := flags.GetBool(globals.SkipStartLabel)
-if !skipStart {
-	if err := p.StartSandbox(primaryDir); err != nil {
-		common.Exitf(1, "error starting primary: %s", err)
-	}
+if skipStart {
+	common.Exit(1, "--skip-start is not supported with non-MySQL replication providers")
+}
+if err := p.StartSandbox(primaryDir); err != nil {
+	common.Exitf(1, "error starting primary: %s", err)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/replication.go` around lines 90 - 99, The code currently lets --skip-start bypass starting the primary while still creating replicas that rely on pg_basebackup; detect when replicas will be cloned (the code path that calls CreateReplica or checks replica config) and prevent skipStart in that case: if skipStart is true and there are replicas to create, return a fatal error or force-start the primary by calling p.StartSandbox(primaryDir) before replica creation; also ensure primaryInfo.Status is only set to "running" after StartSandbox succeeds (or set it to "stopped" if skipStart is honored and no replicas exist). Reference symbols: skipStart (flags.GetBool), p.StartSandbox, CreateReplica, and primaryInfo.

providers/postgresql/replication.go (1)
65-80: ⚠️ Potential issue | 🟠 Major

Replicas still inherit the destructive primary `clear` script.

`CreateReplica` generates the generic lifecycle script set for replicas, so a replica-side `clear` can wipe the `standby.signal` and recovery config created by `pg_basebackup -R`, then restart the node as a standalone instance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/replication.go` around lines 65 - 80, CreateReplica currently generates the full lifecycle script set via GenerateScripts and writes every entry in the scripts map, which causes replicas to inherit the destructive primary "clear" script; update CreateReplica to filter or override the "clear" entry before writing: after calling GenerateScripts(ScriptOptions{...}) inspect the scripts map returned and either delete scripts["clear"] or replace it with a safe replica-specific implementation that only removes standby-specific files (e.g., clears standby.signal and recovery.conf/state) and does not wipe the whole data directory, then proceed with the existing file write loop that iterates over the scripts map.

cmd/unpack.go (2)
43-66: ⚠️ Potential issue | 🟠 Major

PostgreSQL unpack still ignores destination and overwrite semantics.

This branch always unpacks into `$HOME/opt/postgresql/<version>` and never uses `--sandbox-binary`, `--overwrite`, or `--dry-run`, so accepted flags are silently ignored for the PostgreSQL path.
Verify each finding against the current code and only fix it if needed. In `@cmd/unpack.go` around lines 43 - 66, The PostgreSQL unpack branch currently hardcodes targetDir and ignores --sandbox-binary, --overwrite and --dry-run; update the branch that handles providerName == "postgresql" to read the same flags used elsewhere (e.g., sandboxBinary := flags.GetString(globals.SandboxBinary), overwrite := flags.GetBool(globals.Overwrite), dryRun := flags.GetBool(globals.DryRun)), compute targetDir: use sandboxBinary when provided, otherwise fall back to filepath.Join(home, "opt", "postgresql", version) (same as before), and then call postgresql.UnpackDebs (or a new postgresql.UnpackDebsWithOptions) passing targetDir and the overwrite/dry-run flags so those semantics are honored; ensure error handling stays the same and keep symbols postgresql.ClassifyDebs, postgresql.ParseDebVersion and postgresql.UnpackDebs referenced/updated accordingly.
43-67: ⚠️ Potential issue | 🟠 Major

Reject unsupported `--provider` values early.

Any value other than `"postgresql"` currently falls through to the MySQL tarball path. A typo like `--provider=postresql` will be handled as MySQL and fail in a misleading way instead of returning a direct CLI error.

Possible fix

```diff
 providerName, _ := flags.GetString(globals.ProviderLabel)
-if providerName == "postgresql" {
+switch providerName {
+case "mysql":
+	// continue with the tarball flow below
+case "postgresql":
 	if len(args) < 2 {
 		common.Exitf(1, "PostgreSQL unpack requires both server and client .deb files\n"+
 			"Usage: dbdeployer unpack --provider=postgresql postgresql-16_*.deb postgresql-client-16_*.deb")
 	}
@@
 	fmt.Printf("PostgreSQL %s unpacked to %s\n", version, targetDir)
 	return
+default:
+	common.Exitf(1, "unsupported provider %q", providerName)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/unpack.go` around lines 43 - 67, The code currently treats any providerName not equal to "postgresql" as the MySQL path, causing typos like "--provider=postresql" to be misrouted; add explicit validation for flags.GetString(globals.ProviderLabel) (providerName) before the existing if block so unsupported values produce a clear CLI error (Exitf) listing supported providers; specifically, check providerName against allowed set (e.g., "postgresql" and "mysql" if applicable) and call common.Exitf with a helpful message when providerName is unrecognized, leaving the existing postgresql.ClassifyDebs, ParseDebVersion, UnpackDebs, and the MySQL/tarball handling unchanged.

providers/postgresql/postgresql.go (2)
66-76: ⚠️ Potential issue | 🟠 Major

`FindBinary` only validates `postgres`, not other required tools.

This was flagged in a previous review. `CreateSandbox` and `CreateReplica` depend on `initdb`, `pg_ctl`, `psql`, and `pg_basebackup`. A partial unpack can pass this check and fail later during deployment.
Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/postgresql.go` around lines 66 - 76, FindBinary currently only checks for the "postgres" binary which allows partial installs to pass; update FindBinary (used by CreateSandbox and CreateReplica) to verify the presence of all required executables ("postgres", "initdb", "pg_ctl", "psql", "pg_basebackup") in the same versioned bin directory, and return a clear error listing any missing binaries instead of only "postgres"; locate the binPath construction and os.Stat usage in FindBinary and extend it to iterate the required tool names, aggregating missing entries and returning an error if any are absent.
22-37: ⚠️ Potential issue | 🟡 Minor

Negative version parts are accepted (e.g., `"16.-1"`).

This issue was flagged in a previous review. `strconv.Atoi` parses negative numbers successfully, so versions like `"16.-1"` pass validation. Additionally, `ValidateVersion` and `VersionToPort` duplicate the parsing logic; consider extracting a shared parser that validates non-negative integers.
Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/postgresql.go` around lines 22 - 37, ValidateVersion currently accepts negative minor (or major) parts because strconv.Atoi allows negatives; update validation in ValidateVersion to reject any part that starts with '-' or otherwise is negative after parsing, and mirror this check in VersionToPort by extracting a shared parse function (e.g., parsePostgresVersion) that splits the version string, ensures exactly two components, parses both with strconv.Atoi, and returns an error if either value is negative; then update ValidateVersion and VersionToPort to call this new parser to remove duplicated parsing logic.cmd/multiple.go (1)
71-71:⚠️ Potential issue | 🔴 CriticalUnchecked error from
os.MkdirAllcauses CI failure.This issue was flagged in a previous review and confirmed by the pipeline failure. The error return value must be checked.
🐛 Proposed fix
- os.MkdirAll(topologyDir, 0755) + if err := os.MkdirAll(topologyDir, 0755); err != nil { + common.Exitf(1, "error creating topology directory %s: %s", topologyDir, err) + }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/multiple.go` at line 71, The call to os.MkdirAll(topologyDir, 0755) ignores its error return; change it to capture and handle the error (err := os.MkdirAll(topologyDir, 0755)) and then either propagate the error from the enclosing function or log and exit (e.g., processLogger.Errorf / log.Fatalf with context including topologyDir) so CI no longer fails due to unchecked errors from os.MkdirAll.
🧹 Nitpick comments (3)
providers/postgresql/integration_test.go (2)
16-30: Minor: ignored error from `os.UserHomeDir()`

Line 18 discards the error from `os.UserHomeDir()`. If the home directory cannot be determined, the subsequent `ReadDir` would fail anyway, but capturing the error provides clearer diagnostics.

```diff
 func findPostgresVersion(t *testing.T) string {
 	t.Helper()
-	home, _ := os.UserHomeDir()
+	home, err := os.UserHomeDir()
+	if err != nil {
+		t.Skipf("cannot determine home directory: %v", err)
+	}
 	entries, err := os.ReadDir(filepath.Join(home, "opt", "postgresql"))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/integration_test.go` around lines 16 - 30, The helper findPostgresVersion currently ignores the error returned by os.UserHomeDir(); update it to capture and check that error before using home (i.e., assign home, err := os.UserHomeDir()), and if err != nil call t.Skipf with a clear message including the error. Keep the rest of the logic (os.ReadDir on filepath.Join(home, "opt", "postgresql"), iterating entries) unchanged; reference findPostgresVersion to locate the change.
67-67: Consider polling for readiness instead of fixed sleep.

Using `time.Sleep(2 * time.Second)` is fragile: on slower CI runners this may be insufficient, while on fast machines it wastes time. Consider polling the sandbox status or using `pg_isready` until the server accepts connections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/integration_test.go` at line 67, Replace the brittle time.Sleep(2 * time.Second) wait with a readiness poll: remove the fixed sleep and implement a loop that repeatedly checks Postgres readiness (e.g., by invoking "pg_isready" or attempting a sql.Open(...) followed by db.Ping()) with a short backoff until success or a deadline; reference the existing time.Sleep call in integration_test.go and replace it with this polling logic (or a new helper like waitForPostgresReady/StartPostgresSandbox readiness check) to ensure tests proceed as soon as the server accepts connections and fail fast on timeout.cmd/multiple.go (1)
83-90: Hardcoded `DbUser: "postgres"` limits provider extensibility.

While this works for PostgreSQL (the only non-MySQL provider currently supporting "multiple" topology), hardcoding the database user will break when additional providers are added.

Consider deriving `DbUser` from the provider or making it configurable:

```diff
+	// Derive default user based on provider
+	dbUser := "postgres" // default for postgresql
+	if providerName != "postgresql" {
+		dbUser = "admin" // or fetch from provider metadata
+	}
+
 	config := providers.SandboxConfig{
 		Version: version,
 		Dir:     nodeDir,
 		Port:    port,
 		Host:    "127.0.0.1",
-		DbUser:  "postgres",
+		DbUser:  dbUser,
 		Options: map[string]string{},
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/multiple.go` around lines 83 - 90, The config currently hardcodes DbUser: "postgres" in the providers.SandboxConfig initialization which prevents other providers from working; update the code that builds the SandboxConfig (the config variable in cmd/multiple.go) to set DbUser dynamically—either read it from the selected provider (add a method/field on the provider interface that returns the default DB user) or add a CLI/config option and use that value (e.g., a --db-user flag) when populating config.DbUser; ensure any provider-specific defaults are used when the flag is not provided.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/single.go`:
- Around line 497-516: After successfully calling p.CreateSandbox(config) you
must roll back the partially created sandbox on any subsequent failure; update
the error paths after p.StartSandbox(sandboxDir) and after
sandbox.DeployProxySQLForTopology(...) so they perform cleanup: if StartSandbox
succeeded (i.e. !skipStart and StartSandbox returned nil previously) call the
provider stop method (e.g. p.StopSandbox(sandboxDir) or equivalent) to stop the
running sandbox, then remove the sandboxDir from disk (os.RemoveAll(sandboxDir))
before calling common.Exitf; likewise if DeployProxySQLForTopology fails, ensure
you stop the sandbox if it was started and remove sandboxDir to avoid leaving a
stale directory that blocks future retries. Use the existing symbols
p.CreateSandbox, p.StartSandbox, sandbox.DeployProxySQLForTopology, sandboxDir
and providerName to locate where to add the cleanup logic.
- Around line 526-529: The non-MySQL early return skips validation of the
positional version and causes deploySingleNonMySQL to dereference args[0]
unsafely; move or duplicate the positional/version validation so it's executed
before the providerName != "mysql" branch (e.g., call common.CheckOrigin(args)
or check len(args) and validate args[0] prior to calling deploySingleNonMySQL),
ensuring deploySingleNonMySQL and any other non-MySQL path never assume args[0]
exists.
In `@cmd/unpack.go`:
- Around line 93-95: Restore strict positional validation by replacing the
unpackCmd.Args = cobra.MinimumNArgs(1) with a custom validator function that
checks the provider and enforces the correct arity: implement a function
referenced in unpackCmd.Args that examines the provider (from the flags/args the
command uses, same logic that determines MySQL vs PostgreSQL) and returns
cobra.ExactArgs(1) for MySQL and cobra.ExactArgs(2) for PostgreSQL (or an error
for unknown providers); update the validator to use the same provider-detection
code path used when consuming args[0] in the unpack command so the validation
matches how unpack handles arguments (ensure you reference unpackCmd and the
provider-detection logic when locating where to apply the change).
In `@providers/postgresql/integration_test.go`:
- Around line 142-159: The test currently runs psql commands and only checks for
execution errors but does not assert the returned values; update the replication
check (the exec.Command that runs "SELECT count(*) FROM pg_stat_replication;")
to parse and trim cmd.CombinedOutput() into an integer and use
t.Fatalf/t.Fatalf-like assertion to require the count == 2, and update the
per-replica loop (the exec.Command that runs "SELECT pg_is_in_recovery();") to
parse the output string (trim whitespace/newlines) and assert it equals "t" (or
the expected true representation) for each port, failing the test with a clear
message if the values do not match.
In `@providers/postgresql/sandbox.go`:
- Around line 19-37: The code currently creates logDir (variable logDir via
os.MkdirAll) inside dataDir before running initdb, which causes initdb
(initdbPath / initCmd) to fail because the target directory must not contain
files; move the os.MkdirAll(logDir, 0755) call so it runs after
initCmd.CombinedOutput() succeeds (i.e., after initdb completes) and keep the
cleanup call (os.RemoveAll(config.Dir)) on initdb failure unchanged.
---
Duplicate comments:
In `@cmd/multiple.go`:
- Line 71: The call to os.MkdirAll(topologyDir, 0755) ignores its error return;
change it to capture and handle the error (err := os.MkdirAll(topologyDir,
0755)) and then either propagate the error from the enclosing function or log
and exit (e.g., processLogger.Errorf / log.Fatalf with context including
topologyDir) so CI no longer fails due to unchecked errors from os.MkdirAll.
In `@cmd/replication.go`:
- Around line 167-173: The non-MySQL fast path in replicationSandbox returns
before validating args, causing deployReplicationNonMySQL to index args[0] and
panic; move or invoke the existing validation (common.CheckOrigin(args)) before
the provider branch (or add an args length check at the start of
deployReplicationNonMySQL) so the CLI returns a proper error when no version is
provided; update replicationSandbox to call common.CheckOrigin(args) before
checking providerName (and propagate/handle the error) or add an explicit guard
in deployReplicationNonMySQL that validates args length and returns a
CLI-friendly error instead of indexing args[0].
- Around line 67-71: The code currently calls os.MkdirAll(topologyDir, 0755) and
performs WriteFile calls without checking or propagating errors, leaving partial
topologies on failure; update all spots referencing topologyDir, DirExists,
MkdirAll, and WriteFile (including the blocks around primary startup, replica
creation, monitoring-script writes, and ProxySQL deployment) to: check and
return any error from os.MkdirAll immediately, check and return errors from all
ioutil.WriteFile/File writes, and on any subsequent failure clean up the
partially-created topology by calling os.RemoveAll(topologyDir) before returning
the error so partial directories/files are removed and filesystem errors are
propagated to callers.
- Around line 90-99: The code currently lets --skip-start bypass starting the
primary while still creating replicas that rely on pg_basebackup; detect when
replicas will be cloned (the code path that calls CreateReplica or checks
replica config) and prevent skipStart in that case: if skipStart is true and
there are replicas to create, return a fatal error or force-start the primary by
calling p.StartSandbox(primaryDir) before replica creation; also ensure
primaryInfo.Status is only set to "running" after StartSandbox succeeds (or set
it to "stopped" if skipStart is honored and no replicas exist). Reference
symbols: skipStart (flags.GetBool), p.StartSandbox, CreateReplica, and
primaryInfo.
In `@cmd/unpack.go`:
- Around line 43-66: The PostgreSQL unpack branch currently hardcodes targetDir
and ignores --sandbox-binary, --overwrite and --dry-run; update the branch that
handles providerName == "postgresql" to read the same flags used elsewhere
(e.g., sandboxBinary := flags.GetString(globals.SandboxBinary), overwrite :=
flags.GetBool(globals.Overwrite), dryRun := flags.GetBool(globals.DryRun)),
compute targetDir: use sandboxBinary when provided, otherwise fall back to
filepath.Join(home, "opt", "postgresql", version) (same as before), and then
call postgresql.UnpackDebs (or a new postgresql.UnpackDebsWithOptions) passing
targetDir and the overwrite/dry-run flags so those semantics are honored; ensure
error handling stays the same and keep symbols postgresql.ClassifyDebs,
postgresql.ParseDebVersion and postgresql.UnpackDebs referenced/updated
accordingly.
- Around line 43-67: The code currently treats any providerName not equal to
"postgresql" as the MySQL path, causing typos like "--provider=postresql" to be
misrouted; add explicit validation for flags.GetString(globals.ProviderLabel)
(providerName) before the existing if block so unsupported values produce a
clear CLI error (Exitf) listing supported providers; specifically, check
providerName against allowed set (e.g., "postgresql" and "mysql" if applicable)
and call common.Exitf with a helpful message when providerName is unrecognized,
leaving the existing postgresql.ClassifyDebs, ParseDebVersion, UnpackDebs, and
the MySQL/tarball handling unchanged.
In `@providers/postgresql/postgresql.go`:
- Around line 66-76: FindBinary currently only checks for the "postgres" binary
which allows partial installs to pass; update FindBinary (used by CreateSandbox
and CreateReplica) to verify the presence of all required executables
("postgres", "initdb", "pg_ctl", "psql", "pg_basebackup") in the same versioned
bin directory, and return a clear error listing any missing binaries instead of
only "postgres"; locate the binPath construction and os.Stat usage in FindBinary
and extend it to iterate the required tool names, aggregating missing entries
and returning an error if any are absent.
- Around line 22-37: ValidateVersion currently accepts negative minor (or major)
parts because strconv.Atoi allows negatives; update validation in
ValidateVersion to reject any part that starts with '-' or otherwise is negative
after parsing, and mirror this check in VersionToPort by extracting a shared
parse function (e.g., parsePostgresVersion) that splits the version string,
ensures exactly two components, parses both with strconv.Atoi, and returns an
error if either value is negative; then update ValidateVersion and VersionToPort
to call this new parser to remove duplicated parsing logic.
In `@providers/postgresql/replication.go`:
- Around line 65-80: CreateReplica currently generates the full lifecycle script
set via GenerateScripts and writes every entry in the scripts map, which causes
replicas to inherit the destructive primary "clear" script; update CreateReplica
to filter or override the "clear" entry before writing: after calling
GenerateScripts(ScriptOptions{...}) inspect the scripts map returned and either
delete scripts["clear"] or replace it with a safe replica-specific
implementation that only removes standby-specific files (e.g., clears
standby.signal and recovery.conf/state) and does not wipe the whole data
directory, then proceed with the existing file write loop that iterates over the
scripts map.
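The replica-safe `clear` override described in the item above can be sketched as follows; `replicaScripts` and the exact file list are assumptions about the provider's internals, not code from the PR.

```go
package main

import "fmt"

// replicaScripts takes the generic lifecycle script set and replaces the
// destructive primary "clear" script with a standby-safe variant that only
// removes replication state instead of wiping the whole data directory.
func replicaScripts(scripts map[string]string, dataDir string) map[string]string {
	out := make(map[string]string, len(scripts))
	for name, body := range scripts {
		out[name] = body
	}
	out["clear"] = fmt.Sprintf(`#!/bin/bash
# Replica-safe clear: drop standby state only, keep the data directory.
rm -f %s/standby.signal
rm -f %s/postgresql.auto.conf
`, dataDir, dataDir)
	return out
}

func main() {
	scripts := map[string]string{
		"start": "#!/bin/bash\npg_ctl start\n",
		"clear": "#!/bin/bash\nrm -rf data\n",
	}
	fmt.Println(replicaScripts(scripts, "/tmp/replica1/data")["clear"])
}
```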
---
Nitpick comments:
In `@cmd/multiple.go`:
- Around line 83-90: The config currently hardcodes DbUser: "postgres" in the
providers.SandboxConfig initialization which prevents other providers from
working; update the code that builds the SandboxConfig (the config variable in
cmd/multiple.go) to set DbUser dynamically—either read it from the selected
provider (add a method/field on the provider interface that returns the default
DB user) or add a CLI/config option and use that value (e.g., a --db-user flag)
when populating config.DbUser; ensure any provider-specific defaults are used
when the flag is not provided.
In `@providers/postgresql/integration_test.go`:
- Around line 16-30: The helper findPostgresVersion currently ignores the error
returned by os.UserHomeDir(); update it to capture and check that error before
using home (i.e., assign home, err := os.UserHomeDir()), and if err != nil call
t.Skipf with a clear message including the error. Keep the rest of the logic
(os.ReadDir on filepath.Join(home, "opt", "postgresql"), iterating entries)
unchanged; reference findPostgresVersion to locate the change.
- Line 67: Replace the brittle time.Sleep(2 * time.Second) wait with a readiness
poll: remove the fixed sleep and implement a loop that repeatedly checks
Postgres readiness (e.g., by invoking "pg_isready" or attempting a sql.Open(...)
followed by db.Ping()) with a short backoff until success or a deadline;
reference the existing time.Sleep call in integration_test.go and replace it
with this polling logic (or a new helper like
waitForPostgresReady/StartPostgresSandbox readiness check) to ensure tests
proceed as soon as the server accepts connections and fail fast on timeout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f5aa3b97-5707-45de-af46-ff0486fd3206
📒 Files selected for processing (25)
- cmd/deploy_postgresql.go
- cmd/export_test.go
- cmd/multiple.go
- cmd/replication.go
- cmd/root.go
- cmd/single.go
- cmd/unpack.go
- globals/globals.go
- providers/mysql/mysql.go
- providers/postgresql/config.go
- providers/postgresql/config_test.go
- providers/postgresql/integration_test.go
- providers/postgresql/postgresql.go
- providers/postgresql/postgresql_test.go
- providers/postgresql/replication.go
- providers/postgresql/sandbox.go
- providers/postgresql/scripts.go
- providers/postgresql/unpack.go
- providers/postgresql/unpack_test.go
- providers/provider.go
- providers/provider_test.go
- providers/proxysql/config.go
- providers/proxysql/config_test.go
- providers/proxysql/proxysql.go
- sandbox/proxysql_topology.go
✅ Files skipped from review due to trivial changes (6)
- providers/postgresql/config_test.go
- providers/postgresql/unpack_test.go
- providers/postgresql/unpack.go
- providers/postgresql/config.go
- providers/postgresql/postgresql_test.go
- providers/postgresql/scripts.go
🚧 Files skipped from review as they are similar to previous changes (6)
- cmd/root.go
- sandbox/proxysql_topology.go
- providers/proxysql/config_test.go
- providers/provider_test.go
- cmd/deploy_postgresql.go
- providers/proxysql/proxysql.go
```go
if _, err := p.CreateSandbox(config); err != nil {
	common.Exitf(1, "error creating sandbox: %s", err)
}

if !skipStart {
	if err := p.StartSandbox(sandboxDir); err != nil {
		common.Exitf(1, "error starting sandbox: %s", err)
	}
}

// Handle --with-proxysql
withProxySQL, _ := flags.GetBool("with-proxysql")
if withProxySQL {
	if !providers.ContainsString(providers.CompatibleAddons["proxysql"], providerName) {
		common.Exitf(1, "--with-proxysql is not compatible with provider %q", providerName)
	}
	err := sandbox.DeployProxySQLForTopology(sandboxDir, port, nil, 0, "127.0.0.1", providerName)
	if err != nil {
		common.Exitf(1, "ProxySQL deployment failed: %s", err)
	}
```
Roll back the sandbox on post-create failures.
If StartSandbox or DeployProxySQLForTopology fails, the new directory remains on disk and the next retry hits the existing-dir guard. This path needs a cleanup routine that stops the sandbox if it was started and removes the partially created directory.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/single.go` around lines 497-516: after p.CreateSandbox(config) succeeds, roll back
the partially created sandbox on any subsequent failure. Update the error paths after
p.StartSandbox(sandboxDir) and after sandbox.DeployProxySQLForTopology(...) so they perform
cleanup: if StartSandbox succeeded (i.e. !skipStart and it returned nil), call the provider
stop method (e.g. p.StopSandbox(sandboxDir) or equivalent) to stop the running sandbox, then
remove sandboxDir from disk (os.RemoveAll(sandboxDir)) before calling common.Exitf. Likewise,
if DeployProxySQLForTopology fails, stop the sandbox if it was started and remove sandboxDir
so a stale directory does not block future retries. Use the existing symbols p.CreateSandbox,
p.StartSandbox, sandbox.DeployProxySQLForTopology, sandboxDir, and providerName to locate
where to add the cleanup logic.
	// Non-MySQL providers: bypass fillSandboxDefinition entirely
	if providerName != "mysql" {
		deploySingleNonMySQL(cmd, args, providerName)
		return
Validate the positional version before the non-MySQL early return.
This branch bypasses the later common.CheckOrigin(args) call, but deploySingleNonMySQL dereferences args[0] immediately. dbdeployer deploy single --provider=postgresql without a version will panic instead of returning a CLI error.
Minimal fix
func singleSandbox(cmd *cobra.Command, args []string) {
flags := cmd.Flags()
+ common.CheckOrigin(args)
providerName, _ := flags.GetString(globals.ProviderLabel)
// Non-MySQL providers: bypass fillSandboxDefinition entirely
if providerName != "mysql" {
deploySingleNonMySQL(cmd, args, providerName)
return
}
-
-    common.CheckOrigin(args)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/single.go` around lines 526-529: the non-MySQL early return skips validation of the
positional version, so deploySingleNonMySQL dereferences args[0] unsafely. Move or duplicate
the positional/version validation so it runs before the providerName != "mysql" branch (e.g.,
call common.CheckOrigin(args), or check len(args) and validate args[0] prior to calling
deploySingleNonMySQL), ensuring deploySingleNonMySQL and any other non-MySQL path never
assume args[0] exists.
 var unpackCmd = &cobra.Command{
 	Use:  "unpack MySQL-tarball",
-	Args: cobra.ExactArgs(1),
+	Args: cobra.MinimumNArgs(1),
🧩 Analysis chain

🏁 Scripts executed against ProxySQL/dbdeployer:

	find . -name "unpack.go" -type f | head -20
	wc -l ./cmd/unpack.go
	cat -n ./cmd/unpack.go | head -100
	grep -n "ProviderLabel" ./cmd/unpack.go
	grep -r "ProviderLabel\|ProviderValue" ./globals/ --include="*.go" | head -20
	cat -n ./cmd/unpack.go | sed -n '93,131p'
Restore strict positional validation for the MySQL path.
With cobra.MinimumNArgs(1), the command accepts dbdeployer unpack a.tar.gz b.tar.gz and the second positional argument is silently ignored for MySQL (only args[0] is consumed at lines 76 and 88). PostgreSQL requires exactly 2 arguments but validates this at runtime (line 45) rather than upfront. Implement provider-aware validation in the Args field to enforce ExactArgs(1) for MySQL and require exactly 2 arguments for PostgreSQL.
Possible fix
- Args: cobra.MinimumNArgs(1),
+ Args: func(cmd *cobra.Command, args []string) error {
+ providerName, _ := cmd.Flags().GetString(globals.ProviderLabel)
+ if providerName == "postgresql" {
+ if len(args) < 2 {
+ return fmt.Errorf("postgresql unpack requires server and client .deb files")
+ }
+ return nil
+ }
+ return cobra.ExactArgs(1)(cmd, args)
+ },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/unpack.go` around lines 93-95: restore strict positional validation by replacing
unpackCmd.Args = cobra.MinimumNArgs(1) with a custom validator that inspects the provider
and enforces the correct arity: cobra.ExactArgs(1) for MySQL, exactly 2 arguments for
PostgreSQL (or an error for unknown providers). Reuse the same provider-detection logic the
command uses when consuming args[0], so validation matches how unpack actually handles its
arguments.
	cmd := exec.Command(psql, "-h", "127.0.0.1", "-p", "15500", "-U", "postgres", "-t", "-c",
		"SELECT count(*) FROM pg_stat_replication;")
	cmd.Env = append(os.Environ(), fmt.Sprintf("LD_LIBRARY_PATH=%s", libDir))
	output, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("replication check failed: %s: %v", string(output), err)
	}

	// Verify replicas are in recovery
	for _, port := range []int{15501, 15502} {
		cmd := exec.Command(psql, "-h", "127.0.0.1", "-p", fmt.Sprintf("%d", port), "-U", "postgres", "-t", "-c",
			"SELECT pg_is_in_recovery();")
		cmd.Env = append(os.Environ(), fmt.Sprintf("LD_LIBRARY_PATH=%s", libDir))
		output, err := cmd.CombinedOutput()
		if err != nil {
			t.Fatalf("recovery check on port %d failed: %s: %v", port, string(output), err)
		}
	}
Test executes queries but doesn't verify results.
The replication checks run SQL queries and capture output, but never assert the expected values. The test passes as long as psql exits successfully, even if replication isn't actually working.
Consider parsing and asserting:
- pg_stat_replication count equals 2 (the number of replicas)
- pg_is_in_recovery() returns t (true) on each replica
output, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("replication check failed: %s: %v", string(output), err)
}
+ // Verify we have 2 connected replicas
+ if !strings.Contains(string(output), "2") {
+ t.Errorf("expected 2 replicas in pg_stat_replication, got: %s", strings.TrimSpace(string(output)))
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@providers/postgresql/integration_test.go` around lines 142-159: the test runs psql
commands but only checks for execution errors. Update the replication check (the
exec.Command running "SELECT count(*) FROM pg_stat_replication;") to trim and parse
cmd.CombinedOutput() into an integer and assert that the count equals 2, and update the
per-replica loop (running "SELECT pg_is_in_recovery();") to trim the output and assert it
equals "t" (the expected true representation) for each port, failing the test with a clear
message when the values do not match.
Force-pushed from 0a55662 to 3d13adc (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (4)
cmd/replication.go (1)
145-149: ⚠️ Potential issue | 🟡 Minor
Unchecked os.WriteFile errors for monitoring scripts.

Lines 146 and 149 ignore errors from os.WriteFile. A permission or disk failure could leave the topology without its monitoring scripts, causing silent operational issues.

🐛 Proposed fix

  checkReplScript := postgresql.GenerateCheckReplicationScript(scriptOpts)
- os.WriteFile(path.Join(topologyDir, "check_replication"), []byte(checkReplScript), 0755)
+ if err := os.WriteFile(path.Join(topologyDir, "check_replication"), []byte(checkReplScript), 0755); err != nil {
+     common.Exitf(1, "error writing check_replication script: %s", err)
+ }
  checkRecovScript := postgresql.GenerateCheckRecoveryScript(scriptOpts, replicaPorts)
- os.WriteFile(path.Join(topologyDir, "check_recovery"), []byte(checkRecovScript), 0755)
+ if err := os.WriteFile(path.Join(topologyDir, "check_recovery"), []byte(checkRecovScript), 0755); err != nil {
+     common.Exitf(1, "error writing check_recovery script: %s", err)
+ }

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@cmd/replication.go` around lines 145-149: the two os.WriteFile calls that write check_replication and check_recovery currently ignore errors; capture the returned error from each os.WriteFile(path.Join(topologyDir, ...)) call and handle it (return it up the call stack, or log via the command's logger and exit) so permission/disk failures are not silently ignored, with error context naming the target file.

providers/postgresql/replication.go (1)
65-80: ⚠️ Potential issue | 🟠 Major
The clear script will destroy replication state on replicas.

This concern was raised in a previous review and remains unaddressed. The GenerateScripts function creates a clear script that runs rm -rf <dataDir> followed by initdb, which will delete standby.signal and postgresql.auto.conf created by pg_basebackup -R.

Consider either passing a role/replica flag to GenerateScripts to omit or modify the clear script for replicas, or adding a prominent warning in the replica's clear script.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/replication.go` around lines 65-80: extend GenerateScripts and its ScriptOptions with a role/isReplica flag (e.g. ScriptOptions.IsReplica) and, when the caller indicates a replica, either omit the destructive clear script or replace it with a safe variant that preserves standby.signal and postgresql.auto.conf (or writes a prominent warning header); then update the call site in replication.go that constructs ScriptOptions to set the flag appropriately.

providers/postgresql/postgresql.go (2)
66-76: ⚠️ Potential issue | 🟠 Major
FindBinary should validate all required binaries.

This was flagged in a previous review. Currently only postgres is checked, but CreateSandbox and CreateReplica also depend on initdb, pg_ctl, psql, and pg_basebackup. A partial unpack will pass this check but fail later during deployment.

🔧 Suggested fix

 func (p *PostgreSQLProvider) FindBinary(version string) (string, error) {
 	home, err := os.UserHomeDir()
 	if err != nil {
 		return "", fmt.Errorf("cannot determine home directory: %w", err)
 	}
-	binPath := filepath.Join(home, "opt", "postgresql", version, "bin", "postgres")
-	if _, err := os.Stat(binPath); err != nil {
-		return "", fmt.Errorf("PostgreSQL binary not found at %s: %w", binPath, err)
+	binDir := filepath.Join(home, "opt", "postgresql", version, "bin")
+	for _, name := range RequiredBinaries() {
+		path := filepath.Join(binDir, name)
+		if _, err := os.Stat(path); err != nil {
+			return "", fmt.Errorf("required PostgreSQL binary %q not found at %s: %w", name, path, err)
+		}
 	}
-	return binPath, nil
+	return filepath.Join(binDir, "postgres"), nil
 }

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/postgresql.go` around lines 66-76: FindBinary only checks for the postgres executable, so partial installs pass; validate the presence of all required binaries (initdb, pg_ctl, psql, pg_basebackup, postgres) in the bin directory, collect any missing names, and return a clear error listing them so CreateSandbox and CreateReplica fail fast with a helpful message.
22-37: ⚠️ Potential issue | 🟡 Minor
Negative version parts are still accepted.

ValidateVersion accepts versions like "16.-1" because strconv.Atoi("-1") succeeds. This was flagged in a previous review.

🛡️ Suggested fix

 func (p *PostgreSQLProvider) ValidateVersion(version string) error {
 	parts := strings.Split(version, ".")
 	if len(parts) != 2 {
 		return fmt.Errorf("invalid PostgreSQL version format: %q (expected major.minor, e.g. 16.13)", version)
 	}
 	major, err := strconv.Atoi(parts[0])
 	if err != nil {
 		return fmt.Errorf("invalid PostgreSQL major version %q: %w", parts[0], err)
 	}
 	if major < 12 {
 		return fmt.Errorf("PostgreSQL major version must be >= 12, got %d", major)
 	}
-	if _, err := strconv.Atoi(parts[1]); err != nil {
+	minor, err := strconv.Atoi(parts[1])
+	if err != nil {
 		return fmt.Errorf("invalid PostgreSQL minor version %q: %w", parts[1], err)
 	}
+	if minor < 0 {
+		return fmt.Errorf("PostgreSQL minor version cannot be negative: %d", minor)
+	}
 	return nil
 }

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/postgresql.go` around lines 22-37: reject negative values by parsing both parts into integers and explicitly checking major >= 12 and minor >= 0 (or use strconv.ParseUint for non-negative parsing), include the offending part in the error message, and preserve the existing fmt.Errorf wrapping for parse errors.
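The stricter parsing can be sketched as a runnable helper; `parseVersion` is a hypothetical shared function built from the suggested fixes, rejecting malformed and negative parts:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion parses "major.minor" and rejects malformed or negative parts,
// so both ValidateVersion and VersionToPort could build on it.
func parseVersion(version string) (major, minor int, err error) {
	parts := strings.Split(version, ".")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("invalid version format: %q (expected major.minor)", version)
	}
	major, err = strconv.Atoi(parts[0])
	if err != nil || major < 0 {
		return 0, 0, fmt.Errorf("invalid major version %q", parts[0])
	}
	minor, err = strconv.Atoi(parts[1])
	if err != nil || minor < 0 {
		return 0, 0, fmt.Errorf("invalid minor version %q", parts[1])
	}
	return major, minor, nil
}

func main() {
	_, _, err := parseVersion("16.-1")
	fmt.Println(err != nil) // → true: "16.-1" is now rejected
}
```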
🧹 Nitpick comments (11)
providers/postgresql/sandbox.go (1)
78-82: Socket path not populated in SandboxInfo.

The returned SandboxInfo has an empty Socket field, but PostgreSQL uses Unix sockets for local connections. Consider populating this with the socket path (typically <dataDir>/.s.PGSQL.<port>) to enable consistent client connection handling.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/sandbox.go` around lines 78-82: populate the Socket field of the returned providers.SandboxInfo, constructing the path from config.Dir and config.Port (e.g. <dataDir>/.s.PGSQL.<port>) in the same return that sets Dir, Port, and Status.

cmd/single.go (1)
488-494: Hardcoded DbUser: "postgres" ignores user-provided credentials.

The non-MySQL path hardcodes DbUser: "postgres" (line 493) instead of reading from the --db-user flag. This inconsistency with the MySQL path may confuse users who expect the flag to work across providers.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@cmd/single.go` around lines 488-494: where providers.SandboxConfig is constructed in the non-MySQL branch, set DbUser from the parsed --db-user flag (the same variable the MySQL path uses) rather than the literal "postgres".

providers/provider.go (1)
91-99: Consider using slices.Contains from the standard library.

Go 1.21+ includes slices.Contains, which could replace this helper. However, if backward compatibility with older Go versions is required, this implementation is acceptable.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/provider.go` around lines 91-99: either replace the body of ContainsString with slices.Contains (import "slices", Go 1.21+), or, if older Go versions must be supported, add a comment explaining why slices.Contains is not used; keep the signature ContainsString(slice []string, s string) bool unchanged so callers are unaffected.

providers/proxysql/config.go (1)
84-90: User entry uses monitor credentials as a regular client user.

The pgsql_users/mysql_users block registers cfg.MonitorUser as a client user with default_hostgroup=0. This conflates monitor and client roles. Consider whether a separate application user should be configured for client traffic.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/proxysql/config.go` around lines 84-90: register a separate client application user instead of writing cfg.MonitorUser/cfg.MonitorPass into the client users block, e.g. introduce distinct cfg.ClientUser and cfg.ClientPass fields (or skip adding the monitor user when no client user is provided), keeping MonitorUser/MonitorPass for monitoring only.

cmd/multiple.go (1)
92-100: No rollback on node creation/startup failure.

If CreateSandbox or StartSandbox fails for node N, previously created nodes remain running and topologyDir is left on disk. Consider adding cleanup logic to stop and remove previously created nodes before exiting.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@cmd/multiple.go` around lines 92-100: in the failure branches around p.CreateSandbox and p.StartSandbox, roll back by iterating over the earlier nodes, calling the provider teardown methods (e.g. p.StopSandbox and the equivalent removal) on each nodeDir, logging but ignoring per-node teardown errors, and finally removing topologyDir from disk before exiting via common.Exitf, so resources are not left running and topologyDir is not orphaned.

providers/postgresql/postgresql.go (2)
78-84: Minor duplication with FindBinary.

Both basedirFromVersion and FindBinary construct the same base path. Consider having FindBinary call basedirFromVersion internally.

♻️ Small refactor to reduce duplication

 func (p *PostgreSQLProvider) FindBinary(version string) (string, error) {
-	home, err := os.UserHomeDir()
+	basedir, err := basedirFromVersion(version)
 	if err != nil {
-		return "", fmt.Errorf("cannot determine home directory: %w", err)
+		return "", err
 	}
-	binPath := filepath.Join(home, "opt", "postgresql", version, "bin", "postgres")
+	binPath := filepath.Join(basedir, "bin", "postgres")

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/postgresql.go` around lines 78-84: have FindBinary call basedirFromVersion(version) instead of rebuilding the path inline, propagate any error it returns, and keep the same behavior for locating the binary after obtaining the base dir.
50-63: VersionToPort should share validation logic with ValidateVersion.

Both functions parse major.minor independently, which can lead to drift. Also, VersionToPort doesn't reject negative values, which could produce unexpected port numbers.

♻️ Suggested refactor to share parsing

+func parseVersion(version string) (major, minor int, err error) {
+	parts := strings.Split(version, ".")
+	if len(parts) != 2 {
+		return 0, 0, fmt.Errorf("invalid version format: %q (expected major.minor)", version)
+	}
+	major, err = strconv.Atoi(parts[0])
+	if err != nil {
+		return 0, 0, fmt.Errorf("invalid major version %q: %w", parts[0], err)
+	}
+	minor, err = strconv.Atoi(parts[1])
+	if err != nil {
+		return 0, 0, fmt.Errorf("invalid minor version %q: %w", parts[1], err)
+	}
+	if major < 0 || minor < 0 {
+		return 0, 0, fmt.Errorf("version parts cannot be negative: %d.%d", major, minor)
+	}
+	return major, minor, nil
+}
+
 func VersionToPort(version string) (int, error) {
-	parts := strings.Split(version, ".")
-	if len(parts) != 2 {
-		return 0, fmt.Errorf("invalid version format: %q", version)
-	}
-	major, err := strconv.Atoi(parts[0])
-	if err != nil {
-		return 0, err
-	}
-	minor, err := strconv.Atoi(parts[1])
+	major, minor, err := parseVersion(version)
 	if err != nil {
 		return 0, err
 	}
 	return 15000 + major*100 + minor, nil
 }

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/postgresql.go` around lines 50-63: extract the shared parsing/validation into a single helper (e.g. parseVersion) that rejects invalid formats and negative values, have VersionToPort call it to obtain major and minor, then compute and return 15000 + major*100 + minor, and update references so parsing logic is centralized.

providers/postgresql/replication.go (2)
60-63: Gosec G306: consider whether 0644 permissions are acceptable.

The linter flags 0644 as too permissive. However, postgresql.conf needs to be readable by the PostgreSQL process and doesn't contain secrets by default in this context (auth=trust mode). If this config could contain sensitive data in future use cases, tighten to 0600.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/replication.go` around lines 60-63: change the permission argument of the os.WriteFile call that writes confPath from 0644 to 0600, ensuring the PostgreSQL process can still read the file in your deployment.
13-36: Consider validating that the primary is running before pg_basebackup.

If the CLI's --skip-start flag is passed, the primary won't be running and pg_basebackup will fail with an opaque connection error. Adding an explicit check would provide a clearer error message.

From cmd/replication.go:86-95, the deployReplicationNonMySQL function starts the primary only when !skipStart, but CreateReplica assumes the primary is running.

💡 Suggested validation

 func (p *PostgreSQLProvider) CreateReplica(primary providers.SandboxInfo, config providers.SandboxConfig) (*providers.SandboxInfo, error) {
+	if primary.Status != "running" {
+		return nil, fmt.Errorf("primary must be running for pg_basebackup (status: %s)", primary.Status)
+	}
 	basedir, err := p.resolveBasedir(config)

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/replication.go` around lines 13-36: before constructing/executing the pg_basebackup command (bbCmd), add an explicit readiness check against 127.0.0.1:primary.Port (e.g. a short net.DialTimeout, or run pg_isready from binDir) and, on failure, return a clear error such as "primary not accepting connections on port X; start the primary or remove --skip-start" instead of relying on pg_basebackup's opaque error.

providers/postgresql/unpack.go (2)
84-92: Silently skipping missing source directories may mask extraction issues.

If a required directory (e.g. srcBin) doesn't exist after extraction, the code silently continues. This could lead to confusing errors later when RequiredBinaries() validation fails. Consider logging a warning or distinguishing between expected optional directories and critical ones.

💡 Consider failing early for critical directories

 for _, c := range copies {
 	if _, err := os.Stat(c.src); os.IsNotExist(err) {
+		// bin directory is required; lib and share may be optional
+		if c.src == srcBin {
+			return fmt.Errorf("required bin directory not found at %s after extraction", c.src)
+		}
 		continue
 	}

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/unpack.go` around lines 84-92: make missing critical source directories (e.g. srcBin, anything required by RequiredBinaries()) fail early with a descriptive error, while merely logging a warning for truly optional directories; either add a required/optional flag to each copies entry or keep a small set of critical names checked when os.Stat reports IsNotExist.
23-39: Classification depends on check order; document or reorder.

The logic relies on checking postgresql-client- before postgresql-, since client packages match both prefixes. While this works correctly, the strings.HasSuffix(base, ".deb") guard on the server branch is missing from the client branch, creating a minor inconsistency.

♻️ Suggested consistency fix

 func ClassifyDebs(files []string) (server, client string, err error) {
 	for _, f := range files {
 		base := filepath.Base(f)
-		if strings.HasPrefix(base, "postgresql-client-") {
+		if strings.HasPrefix(base, "postgresql-client-") && strings.HasSuffix(base, ".deb") {
 			client = f
 		} else if strings.HasPrefix(base, "postgresql-") && strings.HasSuffix(base, ".deb") {
 			server = f
 		}
 	}

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@providers/postgresql/unpack.go` around lines 23-39: make the client branch mirror the server branch by also requiring strings.HasSuffix(base, ".deb") (or alternatively swap the check order and document it), so classification is deterministic and consistent.
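The suggested consistency fix can be shown as a simplified, runnable sketch; `classifyDebs` here is a hypothetical reduction of the real `ClassifyDebs` (error handling omitted):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// classifyDebs applies the ".deb" suffix guard on both branches; the client
// prefix is tested first because client packages also match the generic
// "postgresql-" prefix.
func classifyDebs(files []string) (server, client string) {
	for _, f := range files {
		base := filepath.Base(f)
		switch {
		case strings.HasPrefix(base, "postgresql-client-") && strings.HasSuffix(base, ".deb"):
			client = f
		case strings.HasPrefix(base, "postgresql-") && strings.HasSuffix(base, ".deb"):
			server = f
		}
	}
	return server, client
}

func main() {
	s, c := classifyDebs([]string{"postgresql-16_16.3.deb", "postgresql-client-16_16.3.deb"})
	fmt.Println(s, c)
}
```

With the suffix guard on both branches, a stray non-.deb file such as a tarball can no longer be classified as either package.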
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/multiple.go`:
- Around line 33-36: The code accesses args[0] in deployMultipleNonMySQL without
validating args length, so add an args validation (e.g., check len(args) > 0 or
use cobra.MinimumNArgs) at the start of deployMultipleNonMySQL before
referencing args[0]; additionally move the common.CheckOrigin(args) call inside
multipleSandbox to run before the provider check/early return so the origin
validation is not skipped, and update any early returns to occur only after
validation passes. Ensure you touch the deployMultipleNonMySQL function and
multipleSandbox function names mentioned so the fix is applied where args[0] and
common.CheckOrigin are used.
In `@cmd/replication.go`:
- Around line 121-127: When replica creation fails inside the loop, we currently
call p.StopSandbox(primaryDir) and p.StopSandbox(path.Join(topologyDir,
fmt.Sprintf("replica%d", j))) but do not remove topologyDir and ignore
StopSandbox errors; add os.RemoveAll(topologyDir) after stopping all sandboxes
to ensure subsequent retries won't hit the existing-dir check, and check/handle
(or at least log) the returned errors from p.StopSandbox calls so they are not
silently ignored before calling common.Exitf to fail; locate the cleanup in the
replica-creation error branch (references: topologyDir, p.StopSandbox,
os.RemoveAll, common.Exitf) and update it accordingly.
---
Duplicate comments:
In `@cmd/replication.go`:
- Around line 145-149: The two os.WriteFile calls that write check_replication
and check_recovery currently ignore errors; update the code after calling
postgresql.GenerateCheckReplicationScript (checkReplScript) and
postgresql.GenerateCheckRecoveryScript (checkRecovScript) to capture the
returned error from os.WriteFile(path.Join(topologyDir, "..."), ...) and handle
it (e.g., return the error up the call stack or log via the command's logger and
exit) so permission/disk failures are not silently ignored; ensure the error
handling provides context mentioning the target filename and uses the
surrounding function's error return path.
In `@providers/postgresql/postgresql.go`:
- Around line 66-76: FindBinary currently only checks for the "postgres"
executable which allows partial installs to pass; update FindBinary to validate
presence of all required PostgreSQL binaries (initdb, pg_ctl, psql,
pg_basebackup, postgres) and return a clear error listing any missing
executables. Locate the PostgreSQLProvider.FindBinary function and change its
logic to build the bin directory path (as done now with binPath), iterate over
the required binary names, stat each one in that directory, collect any missing
names, and if any are missing return an error mentioning which binaries are
absent (so CreateSandbox and CreateReplica will fail fast with a helpful
message).
- Around line 22-37: ValidateVersion currently accepts negative numeric parts
(e.g., "16.-1") because strconv.Atoi succeeds on "-1"; update the function
(ValidateVersion) to reject negative values by parsing both parts into integers
(major and minor) and explicitly checking that major >= 12 and minor >= 0 (or
alternatively use strconv.ParseUint for non-negative parsing) before returning
nil; ensure error messages include the offending part (parts[0] or parts[1]) and
preserve existing fmt.Errorf wrapping for parse errors.
In `@providers/postgresql/replication.go`:
- Around line 65-80: The generated "clear" script created by GenerateScripts
currently unconditionally deletes the dataDir (rm -rf) which will destroy
replica state; update GenerateScripts (and its ScriptOptions) to accept a
role/isReplica flag (e.g., ScriptOptions.Role or ScriptOptions.IsReplica) and,
when the caller indicates a replica, either omit the destructive "clear" script
or replace it with a safe variant that preserves standby.signal and
postgresql.auto.conf or writes a prominent warning header; then update this call
site that constructs ScriptOptions in replication.go to set the flag
appropriately so replicas don't get the destructive clear script (or get the
warning).
---
Nitpick comments:
In `@cmd/multiple.go`:
- Around line 92-100: When CreateSandbox or StartSandbox fails for node i you
must roll back previously created nodes and cleanup topologyDir: catch the error
in the failure branches around p.CreateSandbox and p.StartSandbox, iterate over
the earlier nodes (0..i-1) and call the provider teardown methods (e.g.,
p.StopSandbox and p.RemoveSandbox or the equivalent teardown functions used
elsewhere) using each node's nodeDir, log but ignore per-node teardown errors,
and finally remove topologyDir from disk before exiting; implement this rollback
where the current error handling calls common.Exitf so resources are not left
running and the topologyDir is not orphaned.
In `@cmd/single.go`:
- Around line 488-494: The SandboxConfig initialization hardcodes DbUser:
"postgres" instead of using the user-provided flag; update the non-MySQL branch
where providers.SandboxConfig is constructed to set DbUser from the parsed flag
variable (e.g., dbUser) rather than the literal "postgres" so the --db-user
value is respected (same variable the MySQL path uses).
In `@providers/postgresql/postgresql.go`:
- Around line 78-84: FindBinary duplicates the base-path construction done in
basedirFromVersion; refactor FindBinary to call basedirFromVersion(version) and
use its returned path instead of rebuilding filepath.Join(os.UserHomeDir(),
"opt", "postgresql", version) inline. Preserve and propagate any errors from
basedirFromVersion (e.g., return or wrap them as FindBinary currently does) and
keep the same behavior for locating the binary after obtaining the base dir.
- Around line 50-63: VersionToPort duplicates parsing/validation done in
ValidateVersion and currently permits negative major/minor values; refactor by
extracting the shared parsing/validation into a single helper (e.g.,
parseVersion or reuse ValidateVersion to return parsed ints) and have
VersionToPort call that helper to obtain major and minor, ensure the helper
rejects invalid formats and negative values, then compute and return 15000 +
major*100 + minor; update references to use the new helper so parsing logic is
centralized.
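A sketch of the centralized helper under these assumptions (the `parseVersion`/`versionToPort` names and error strings are illustrative); `ParseUint` rejects negative parts for free:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion is the shared helper: ParseUint rejects negative parts
// (so "16.-1" fails), and the major-version floor lives in one place.
func parseVersion(version string) (int, int, error) {
	parts := strings.Split(version, ".")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("invalid version %q: want major.minor", version)
	}
	major, err := strconv.ParseUint(parts[0], 10, 16)
	if err != nil {
		return 0, 0, fmt.Errorf("invalid major version %q: %w", parts[0], err)
	}
	minor, err := strconv.ParseUint(parts[1], 10, 16)
	if err != nil {
		return 0, 0, fmt.Errorf("invalid minor version %q: %w", parts[1], err)
	}
	if major < 12 {
		return 0, 0, fmt.Errorf("unsupported major version %d: minimum is 12", major)
	}
	return int(major), int(minor), nil
}

// versionToPort reuses the helper instead of re-parsing.
func versionToPort(version string) (int, error) {
	major, minor, err := parseVersion(version)
	if err != nil {
		return 0, err
	}
	return 15000 + major*100 + minor, nil
}

func main() {
	port, err := versionToPort("16.13")
	fmt.Println(port, err) // 16613 <nil>
	_, err = versionToPort("16.-1")
	fmt.Println(err != nil) // true
}
```

ValidateVersion can then be a thin wrapper that discards the parsed integers, so both call sites share one set of validation rules.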
In `@providers/postgresql/replication.go`:
- Around line 60-63: The file write for postgresql.conf uses
os.WriteFile(confPath, ... , 0644) which the linter flags as too permissive;
update the mode to 0600 to restrict permissions (ensure the PostgreSQL process
can still read the file in your deployment), i.e. change the permission argument
in the os.WriteFile call that writes confPath (the block that builds newLines
and handles os.RemoveAll(config.Dir) on error) from 0644 to 0600.
- Around line 13-36: CreateReplica currently assumes the primary is running
before invoking pg_basebackup (pgBasebackup, bbCmd) so add an explicit readiness
check using primary.Port (and host 127.0.0.1) before running bbCmd: e.g. perform
a short TCP dial with net.DialTimeout to 127.0.0.1:fmt.Sprintf("%d",
primary.Port) (or run the pg_isready binary from binDir) and if it fails return
a clear error like "primary not accepting connections on port X; start the
primary or remove --skip-start" instead of relying on pg_basebackup's opaque
error; keep this check immediately before constructing/executing bbCmd in
CreateReplica.
In `@providers/postgresql/sandbox.go`:
- Around line 78-82: Populate the Socket field on the returned
providers.SandboxInfo so clients can connect via the Unix socket: construct the
path from config.Dir and config.Port (e.g. <dataDir>/.s.PGSQL.<port>) and set
SandboxInfo.Socket to that string in the same return where Dir, Port and Status
are set; update the return in the function that builds SandboxInfo to include
Socket: path using config.Dir and config.Port.
In `@providers/postgresql/unpack.go`:
- Around line 84-92: The loop that copies directories over the `copies` slice
currently skips missing sources silently; update the logic in the unpack
function that iterates `copies` so missing critical source directories (e.g.,
`srcBin`, any entries required by `RequiredBinaries()`) cause an early failure
with a descriptive error, while truly optional directories are only warned about
via logging; to implement this, either add an "optional"/"required" flag to each
`copies` entry or maintain a small set of critical names and, when
os.Stat(c.src) yields IsNotExist, call processLogger.Warnf (or similar) for
optional ones and return fmt.Errorf(...) for required ones so the error is
surfaced before later `RequiredBinaries()` validation fails.
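The "optional/required flag per entry" option could be sketched like this (the `copyEntry` and `checkSources` names are illustrative; the actual recursive copy is elided):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// copyEntry tags each source directory as required or optional, so the
// unpack step can fail fast on missing critical dirs (like bin/) while
// only warning about truly optional ones.
type copyEntry struct {
	src      string
	dst      string
	required bool
}

func checkSources(copies []copyEntry) error {
	for _, c := range copies {
		if _, err := os.Stat(c.src); os.IsNotExist(err) {
			if c.required {
				return fmt.Errorf("required source directory %s is missing", c.src)
			}
			log.Printf("optional directory %s not found, skipping", c.src)
		}
	}
	return nil
}

func main() {
	copies := []copyEntry{
		{src: "/nonexistent/bin", dst: "bin", required: true},
		{src: "/nonexistent/doc", dst: "doc", required: false},
	}
	err := checkSources(copies)
	fmt.Println(err != nil) // true: a missing required dir is fatal
}
```

Failing at stat time points at the broken .deb layout directly, instead of letting the later RequiredBinaries() check report a confusing downstream symptom.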
- Around line 23-39: In ClassifyDebs, the client check currently lacks the
".deb" suffix guard and relies on order; update the client branch condition to
mirror the server branch by requiring strings.HasPrefix(base,
"postgresql-client-") && strings.HasSuffix(base, ".deb") (or alternatively swap
to check server first and document the order), ensuring both branches
consistently validate the ".deb" suffix so classification is deterministic and
consistent.
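With both branches guarding on the `.deb` suffix, the classification could look like this sketch (client must still be checked first, since every client package also matches the bare `postgresql-` prefix; the `classifyDeb` name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// classifyDeb checks the client prefix first (client packages also match
// the "postgresql-" prefix) and requires the .deb suffix on both
// branches, so classification is deterministic.
func classifyDeb(base string) string {
	switch {
	case strings.HasPrefix(base, "postgresql-client-") && strings.HasSuffix(base, ".deb"):
		return "client"
	case strings.HasPrefix(base, "postgresql-") && strings.HasSuffix(base, ".deb"):
		return "server"
	default:
		return "unknown"
	}
}

func main() {
	for _, f := range []string{
		"postgresql-16_16.13-1_amd64.deb",
		"postgresql-client-16_16.13-1_amd64.deb",
		"postgresql-client-16.tar.gz",
	} {
		fmt.Println(f, "->", classifyDeb(f))
	}
}
```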
In `@providers/provider.go`:
- Around line 91-99: The helper function ContainsString iterates manually to
check membership; replace its body with the standard library implementation by
using slices.Contains from the Go 1.21+ package (import "slices") in the
ContainsString function, or if you must support older Go versions, leave the
current implementation but add a comment stating why slices.Contains isn't used
for future clarity; ensure the function signature (ContainsString(slice
[]string, s string) bool) remains unchanged so callers are unaffected.
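The stdlib delegation is a one-liner that keeps callers untouched:

```go
package main

import (
	"fmt"
	"slices"
)

// ContainsString keeps its public signature but delegates membership
// testing to the standard library (Go 1.21+).
func ContainsString(slice []string, s string) bool {
	return slices.Contains(slice, s)
}

func main() {
	fmt.Println(ContainsString([]string{"mysql", "postgresql"}, "postgresql")) // true
}
```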
In `@providers/proxysql/config.go`:
- Around line 84-90: The current users block writes
cfg.MonitorUser/cfg.MonitorPass into the client users list (usersKey),
conflating monitor and client roles; change the code to register a separate
client application user instead: introduce and use distinct config fields (e.g.,
cfg.ClientUser and cfg.ClientPass) when building the usersKey block (or skip
adding the monitor user there if a client user is not provided), and ensure
cfg.MonitorUser/cfg.MonitorPass remain only for monitoring logic; update any
references around usersKey, cfg.MonitorUser, and cfg.MonitorPass accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3b3f87ee-e2a7-4738-a3e9-f04fecea749a
📒 Files selected for processing (25)
- cmd/deploy_postgresql.go
- cmd/export_test.go
- cmd/multiple.go
- cmd/replication.go
- cmd/root.go
- cmd/single.go
- cmd/unpack.go
- globals/globals.go
- providers/mysql/mysql.go
- providers/postgresql/config.go
- providers/postgresql/config_test.go
- providers/postgresql/integration_test.go
- providers/postgresql/postgresql.go
- providers/postgresql/postgresql_test.go
- providers/postgresql/replication.go
- providers/postgresql/sandbox.go
- providers/postgresql/scripts.go
- providers/postgresql/unpack.go
- providers/postgresql/unpack_test.go
- providers/provider.go
- providers/provider_test.go
- providers/proxysql/config.go
- providers/proxysql/config_test.go
- providers/proxysql/proxysql.go
- sandbox/proxysql_topology.go
✅ Files skipped from review due to trivial changes (5)
- providers/proxysql/config_test.go
- providers/postgresql/config_test.go
- providers/postgresql/unpack_test.go
- providers/postgresql/postgresql_test.go
- providers/postgresql/config.go
🚧 Files skipped from review as they are similar to previous changes (9)
- cmd/export_test.go
- sandbox/proxysql_topology.go
- cmd/root.go
- providers/mysql/mysql.go
- providers/postgresql/integration_test.go
- providers/proxysql/proxysql.go
- providers/provider_test.go
- cmd/deploy_postgresql.go
- providers/postgresql/scripts.go
func deployMultipleNonMySQL(cmd *cobra.Command, args []string, providerName string) {
	flags := cmd.Flags()
	version := args[0]
	nodes, _ := flags.GetInt(globals.NodesLabel)
Missing argument validation before accessing args[0].
Same pattern as other deploy commands: args[0] is accessed at line 35 without validation. The common.CheckOrigin(args) at line 118 is bypassed by the early return at line 114.
Move the validation before the provider check in multipleSandbox:
func multipleSandbox(cmd *cobra.Command, args []string) {
flags := cmd.Flags()
+ common.CheckOrigin(args)
providerName, _ := flags.GetString(globals.ProviderLabel)
if providerName != "mysql" {
deployMultipleNonMySQL(cmd, args, providerName)
return
}
-
- common.CheckOrigin(args)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/multiple.go` around lines 33 - 36, The code accesses args[0] in
deployMultipleNonMySQL without validating args length, so add an args validation
(e.g., check len(args) > 0 or use cobra.MinimumNArgs) at the start of
deployMultipleNonMySQL before referencing args[0]; additionally move the
common.CheckOrigin(args) call inside multipleSandbox to run before the provider
check/early return so the origin validation is not skipped, and update any early
returns to occur only after validation passes. Ensure you touch the
deployMultipleNonMySQL function and multipleSandbox function names mentioned so
the fix is applied where args[0] and common.CheckOrigin are used.
			// Cleanup on failure
			p.StopSandbox(primaryDir)
			for j := 1; j < i; j++ {
				p.StopSandbox(path.Join(topologyDir, fmt.Sprintf("replica%d", j)))
			}
			common.Exitf(1, "error creating replica %d: %s", i, err)
		}
Cleanup on failure doesn't remove the topology directory.
When replica creation fails, lines 122-126 stop sandboxes but don't remove topologyDir. The next retry will fail on line 68's existing-dir check. Consider adding os.RemoveAll(topologyDir) after stopping sandboxes.
Also, the p.StopSandbox calls at lines 122 and 124 have unchecked error returns (flagged by linter).
🐛 Proposed fix
if _, err := p.CreateReplica(primaryInfo, replicaConfig); err != nil {
// Cleanup on failure
- p.StopSandbox(primaryDir)
+ _ = p.StopSandbox(primaryDir)
for j := 1; j < i; j++ {
- p.StopSandbox(path.Join(topologyDir, fmt.Sprintf("replica%d", j)))
+ _ = p.StopSandbox(path.Join(topologyDir, fmt.Sprintf("replica%d", j)))
}
+ os.RemoveAll(topologyDir)
common.Exitf(1, "error creating replica %d: %s", i, err)
}
🧰 Tools
🪛 GitHub Check: Lint
[failure] 124-124:
Error return value of p.StopSandbox is not checked (errcheck)
[failure] 122-122:
Error return value of p.StopSandbox is not checked (errcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/replication.go` around lines 121 - 127, When replica creation fails
inside the loop, we currently call p.StopSandbox(primaryDir) and
p.StopSandbox(path.Join(topologyDir, fmt.Sprintf("replica%d", j))) but do not
remove topologyDir and ignore StopSandbox errors; add os.RemoveAll(topologyDir)
after stopping all sandboxes to ensure subsequent retries won't hit the
existing-dir check, and check/handle (or at least log) the returned errors from
p.StopSandbox calls so they are not silently ignored before calling common.Exitf
to fail; locate the cleanup in the replica-creation error branch (references:
topologyDir, p.StopSandbox, os.RemoveAll, common.Exitf) and update it
accordingly.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@providers/postgresql/sandbox.go`:
- Around line 45-48: The code currently hardcodes ListenAddresses="127.0.0.1"
when constructing PostgresqlConfOptions, ignoring the caller-provided bind
address (config.Host / SandboxConfig.Host). Change the construction to honor
config.Host: if config.Host is non-empty use that as ListenAddresses, otherwise
fall back to "127.0.0.1"; alternatively, validate config.Host and return an
error if unsupported values are provided. Update the call site that builds
PostgresqlConfOptions (the GeneratePostgresqlConf invocation) to reference
config.Host and add validation logic around it.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e5300fe5-84c1-46e5-8122-b1b204acd78b
📒 Files selected for processing (1)
providers/postgresql/sandbox.go
	pgConf := GeneratePostgresqlConf(PostgresqlConfOptions{
		Port:            config.Port,
		ListenAddresses: "127.0.0.1",
		UnixSocketDir:   dataDir,
SandboxConfig.Host is currently ignored in server bind configuration.
Line 47 hardcodes ListenAddresses to 127.0.0.1, so caller-provided bind address (config.Host) is silently dropped. Please either honor config.Host (with localhost fallback) or explicitly reject unsupported values.
Suggested fix
+ listenAddr := config.Host
+ if listenAddr == "" {
+ listenAddr = "127.0.0.1"
+ }
pgConf := GeneratePostgresqlConf(PostgresqlConfOptions{
Port: config.Port,
- ListenAddresses: "127.0.0.1",
+ ListenAddresses: listenAddr,
UnixSocketDir: dataDir,
LogDir: logDir,
Replication: replication,
	})

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@providers/postgresql/sandbox.go` around lines 45 - 48, The code currently
hardcodes ListenAddresses="127.0.0.1" when constructing PostgresqlConfOptions,
ignoring the caller-provided bind address (config.Host / SandboxConfig.Host).
Change the construction to honor config.Host: if config.Host is non-empty use
that as ListenAddresses, otherwise fall back to "127.0.0.1"; alternatively,
validate config.Host and return an error if unsupported values are provided.
Update the call site that builds PostgresqlConfOptions (the
GeneratePostgresqlConf invocation) to reference config.Host and add validation
logic around it.
Force-pushed from ce1957e to 71498b3
…plica

Add ErrNotSupported sentinel error and two new methods to the Provider interface. MySQL and ProxySQL providers return ErrNotSupported from CreateReplica; PostgreSQL (upcoming) will be the first real user.
Implements the PostgreSQL provider skeleton with Name, ValidateVersion, DefaultPorts, SupportedTopologies, VersionToPort, FindBinary, StartSandbox, StopSandbox, and stub CreateSandbox/CreateReplica. Includes full test coverage; CreateSandbox and CreateReplica will be filled in Tasks 4 and 6 respectively.
… lifecycle scripts
Implements ParseDebVersion, ClassifyDebs, RequiredBinaries, and UnpackDebs to extract PostgreSQL server and client .deb packages into a structured target directory with bin/, lib/, and share/ subdirectories.
Register the PostgreSQL provider in root.go and add --provider flag to single, multiple, and replication commands. Non-MySQL providers bypass fillSandboxDefinition and route to dedicated deploy functions. The DeployProxySQLForTopology function gains a backendProvider parameter to support future cross-database ProxySQL wiring.
…rs config)

Add BackendProvider field to ProxySQLConfig; GenerateConfig now branches on backend_provider="postgresql" to emit pgsql_variables/pgsql_servers/pgsql_users instead of the mysql_* equivalents. use_proxy script uses psql for PostgreSQL backends. Tests cover both MySQL and PostgreSQL config generation paths.
Add tests for ContainsString, CompatibleAddons, and SupportedTopologies to verify provider compatibility constraints across database types.
Adds the deploy postgresql subcommand following the deploy proxysql pattern. Supports version validation, binary discovery, port allocation, sandbox creation, and optional skip-start flag.
Add build-tagged integration tests for single sandbox and replication topologies. Tests require real PostgreSQL binaries in ~/opt/postgresql/ and are excluded from normal go test runs via //go:build integration.
The TestExportImport test checks the command tree structure. Adding the 'deploy postgresql' subcommand increased deploy's subcommand count from 4 to 5.
Deb-packaged initdb looks for share data at its compiled --prefix (/usr/share/postgresql/), not relative to the binary. Use -L to explicitly point to our extracted share directory.
Deb-built postgres looks for share data at ../share/postgresql/<major>/ relative to its binary. Copy extracted share files to both share/ and share/postgresql/<major>/ so both initdb and postgres can find them.
Force-pushed from 71498b3 to cadfe55
Summary
- Full PostgreSQL provider: initdb, pg_ctl, psql, streaming replication via pg_basebackup
- dbdeployer unpack --provider=postgresql <server.deb> <client.deb>
- --provider flag on deploy single/multiple/replication commands for provider routing
- dbdeployer deploy postgresql <version> standalone command
- --with-proxysql generates pgsql_servers/pgsql_users/pgsql_variables config blocks
- --flavor rejected for non-MySQL providers, addon compatibility validation
- SupportedTopologies() and CreateReplica() — MySQL and ProxySQL providers updated
providers/postgresql/— 6 implementation files + 3 test filescmd/deploy_postgresql.go— standalone deploy commandTest Plan
providers/,providers/mysql/,providers/postgresql/,providers/proxysql/cmd/tests pass including updatedTestExportImportgo build -o /dev/null .)go test ./providers/postgresql/ -tags integration -vdbdeployer unpack --provider=postgresql postgresql-16_*.deb postgresql-client-16_*.debdbdeployer deploy postgresql 16.13dbdeployer deploy replication 16.13 --provider=postgresqldbdeployer deploy replication 16.13 --provider=postgresql --with-proxysqlRelated Issues
Design Spec
docs/superpowers/specs/2026-03-24-phase3-postgresql-provider-design.mdSummary by CodeRabbit
New Features
Tests