OCPEDGE-2381: Validate no WAL corruption when both nodes shutdown gracefully #30925
kasturinarra wants to merge 1 commit into openshift:main
Conversation
Pipeline controller notification: for optional jobs, comment. This repository is configured in automatic mode.
@kasturinarra: This pull request references OCPEDGE-2381, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough: A new test case is added to the etcd recovery suite that triggers graceful reboots on both nodes, waits 90 seconds, validates etcd recovery state with expected member roles, and verifies the
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/extended/two_node/tnf_recovery.go (1)
413-435: Avoid a second copy of this suite block. This repeats the `Describe`/`BeforeEach` above, and the copy has already drifted by dropping `[OCPFeatureGate:DualReplica]`. Please add the new `It` to the existing suite instead so markers and setup stay in one place.
As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/two_node/tnf_recovery.go` around lines 413 - 435, There is a duplicated Describe/BeforeEach block for the "[sig-etcd] Two Node with Fencing etcd recovery" suite; remove the second Describe block and instead add the new test It into the original suite so setup and markers (including the [OCPFeatureGate:DualReplica] tag) remain intact. Locate the duplicate Describe(...) block that defines oc, etcdClientFactory, peerNode, targetNode and its BeforeEach, delete that duplicated Describe/BeforeEach, and move the new It test body into the existing Describe that already declares oc and uses BeforeEach/etcdClientFactory so the markers and shared setup are not dropped. Ensure the moved It references the same oc, etcdClientFactory, peerNode and targetNode variables and keep all original markers on the suite.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/two_node/tnf_recovery.go`:
- Around line 449-463: The test currently calls
validateEtcdRecoveryState(targetNode, peerNode, ...) without first proving the
scheduled reboots actually happened; capture each node's pre-disruption boot-id
(e.g., via exutil.DebugNodeRetryWithOptionsAndChroot calling "cat
/proc/sys/kernel/random/boot_id") before issuing "shutdown -r 1", then poll both
nodes after the disruption and wait until their boot-id values have changed
(with a timeout) before calling validateEtcdRecoveryState; place this boot-id
capture and change-check around the existing shutdown/recovery logic and use the
same targetNode and peerNode identifiers so the test fails if a node never
rebooted.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1c865daf-4792-4b5c-b9f9-dcd740484608
📒 Files selected for processing (1)
test/extended/two_node/tnf_recovery.go
Scheduling required tests:
/retest
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-dualstack-recovery-techpreview
@kasturinarra: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c7549270-27ad-11f1-9069-e5b6d1c3e863-0 |
fonta-rh left a comment:
Review: Minor structural suggestion
The test itself looks good — clean structure, correct use of validateEtcdRecoveryState, and the podman container check is a nice addition.
One suggestion: consider moving the It block into the existing Describe block (line 73) rather than creating a new one. The new Describe at line 413 duplicates the var declarations and BeforeEach setup, and drops the [OCPFeatureGate:DualReplica] label that every other test in this file carries.
Moving it in would:
- Restore the `[OCPFeatureGate:DualReplica]` label (used by promotion tracking tooling)
- Eliminate ~25 lines of duplicated setup code
- Follow the same pattern as the other non-hypervisor tests (graceful, ungraceful, network disruption), which coexist in that block alongside `[Requires:HypervisorSSHConfig]` tests
Everything else (the 90s sleep, the sequential reboot triggering, the final-state-only validation) follows the established patterns in this file.
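The restructuring suggested above amounts to registering the new test inside the suite that already owns the shared setup, rather than cloning that setup into a second suite. A stripped-down, hypothetical sketch in plain Go (the `suite` type here mimics the Ginkgo `Describe`/`BeforeEach`/`It` relationship; none of these names are from the real file, where the shared state is `oc`, `etcdClientFactory`, `peerNode`, `targetNode` and the suite markers):

```go
package main

import "fmt"

// suite mimics a Ginkgo Describe block: one shared setup function
// (the BeforeEach) and a list of registered test bodies (the Its).
type suite struct {
	name  string
	setup func() string // stands in for BeforeEach populating shared state
	tests []func(env string)
}

// it registers a test body in this suite, so it inherits the suite's
// setup and markers instead of redeclaring them.
func (s *suite) it(t func(env string)) { s.tests = append(s.tests, t) }

// run executes every registered test against the same setup.
func (s *suite) run() {
	for _, t := range s.tests {
		t(s.setup())
	}
}

func main() {
	existing := &suite{
		name:  "[sig-etcd] Two Node with Fencing etcd recovery",
		setup: func() string { return "shared-env" },
	}
	// Existing tests (graceful, ungraceful, network disruption) live here...
	existing.it(func(env string) { fmt.Println("graceful reboot test uses", env) })
	// ...and the new double-reboot test is added to the SAME suite,
	// rather than duplicating setup in a second block.
	existing.it(func(env string) { fmt.Println("double graceful reboot test uses", env) })
	existing.run()
}
```

Because both tests receive the output of the one shared setup, labels and preconditions declared on the suite apply to the new test automatically, which is exactly what the second `Describe` block in the PR loses.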
/hold to review comments
/lgtm
Force-pushed 011bd7f to ee9cd82
♻️ Duplicate comments (1)
test/extended/two_node/tnf_recovery.go (1)
427-431: ⚠️ Potential issue | 🟠 Major — Ensure both node reboots are observed before accepting recovery
Line 428 validates a steady-state membership shape (`started && !learner`) that can already be true before disruption. If a scheduled reboot is skipped/delayed, this test can still pass without exercising the double-reboot/WAL-recovery path.
Suggested fix:

```diff
 g.By("Waiting for graceful shutdown to take effect (shutdown -r 1 schedules reboot in 1 minute)")
 time.Sleep(90 * time.Second)
+g.By("Waiting for both nodes to report a reboot")
+o.Eventually(func() error {
+    for _, node := range []*corev1.Node{&targetNode, &peerNode} {
+        rebooted, err := utils.HasNodeRebooted(oc, node)
+        if err != nil {
+            return err
+        }
+        if !rebooted {
+            return fmt.Errorf("node %s has not rebooted yet", node.Name)
+        }
+    }
+    return nil
+}, membersHealthyAfterDoubleReboot, utils.FiveSecondPollInterval).ShouldNot(o.HaveOccurred())
 g.By(fmt.Sprintf("Waiting for both etcd members to become healthy (timeout: %v)", membersHealthyAfterDoubleReboot))
 validateEtcdRecoveryState(oc, etcdClientFactory, &targetNode,
```

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
Verify each finding against the current code and only fix it if needed. In `@test/extended/two_node/tnf_recovery.go` around lines 427 - 431, Before calling validateEtcdRecoveryState, explicitly wait and assert that both reboots were actually observed for targetNode and peerNode (do not rely on the steady-state check alone); add a pre-check that polls until each node shows the expected reboot marker (e.g., incremented reboot count or the node/machine status annotation you use to detect a reboot) with the same membersHealthyAfterDoubleReboot timeout and utils.FiveSecondPollInterval, and only then call validateEtcdRecoveryState(targetNode, peerNode, ...); this ensures a skipped/delayed reboot cannot make the test pass.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 698c637b-2491-432b-9358-322ee4c957c2
📒 Files selected for processing (1)
test/extended/two_node/tnf_recovery.go
Scheduling required tests:
/lgtm
/approve
@kasturinarra: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-dualstack-recovery-techpreview |
@kasturinarra: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/11699ea0-343d-11f1-9e16-74cf6366d2af-0 |
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-techpreview
@kasturinarra: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/02c33f20-34ae-11f1-9767-26657c4e57f4-0 |
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-techpreview
@kasturinarra: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d62d7880-34c2-11f1-83fa-4c9c0491063f-0 |
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: eggfoobar, fonta-rh, kasturinarra. The full list of commands accepted by this bot can be found here. The pull request process is described here. Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview
@kasturinarra: trigger 3 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/e48fc380-34e8-11f1-9174-b44e85eef264-0 |
@jaypoulz: This PR has been marked as verified by
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
No description provided.