
fix: correct state and PeerDisconnected events in removal paths#436

Merged
xdustinface merged 1 commit into v0.42-dev from refactor/remove-peer-helper on Feb 17, 2026

Conversation

@xdustinface
Collaborator

@xdustinface xdustinface commented Feb 14, 2026

  • Fixes inconsistent state left by `disconnect_peer` and the maintenance loop's health check: both removed peers without decrementing the connection counter or emitting `PeerDisconnected` / `PeersUpdated` events.
  • Extracts `remove_peer_and_notify` and `notify_peer_removed` helpers to ensure consistent cleanup across all removal paths.
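The helper split described above can be sketched as follows. This is a minimal, self-contained illustration, not the crate's actual code: the `Manager` struct, its fields, and the use of a std `mpsc` channel as the event bus are assumptions; only the helper names (`remove_peer_and_notify`, `notify_peer_removed`) and event names come from the PR.

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::Sender;

// Simplified stand-ins for the crate's real types (names are assumptions).
#[derive(Debug, PartialEq)]
enum NetworkEvent {
    PeerDisconnected(SocketAddr),
    PeersUpdated(usize),
}

struct Manager {
    peers: HashMap<SocketAddr, ()>, // stands in for the connection pool
    connected_peer_count: AtomicUsize,
    events: Sender<NetworkEvent>,
}

impl Manager {
    /// Remove the peer from the pool, then notify; a no-op if it was absent,
    /// so a peer already removed elsewhere never produces duplicate events.
    fn remove_peer_and_notify(&mut self, addr: SocketAddr) {
        if self.peers.remove(&addr).is_some() {
            self.notify_peer_removed(addr);
        }
    }

    /// Decrement the counter (saturating at zero) and emit both events.
    fn notify_peer_removed(&self, addr: SocketAddr) {
        let prev = self
            .connected_peer_count
            .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |n| {
                Some(n.saturating_sub(1))
            })
            .unwrap_or(0);
        // Derive the new count from the same atomic operation instead of a
        // separate load, so the reported count cannot be stale.
        let count = prev.saturating_sub(1);
        let _ = self.events.send(NetworkEvent::PeerDisconnected(addr));
        let _ = self.events.send(NetworkEvent::PeersUpdated(count));
    }
}
```

Every removal path (reader shutdown, maintenance tick, handshake failure, explicit disconnect) funnels through one pair of helpers, so the counter and the events can no longer drift apart.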

Based on:

Summary by CodeRabbit

  • Bug Fixes

    • Ensured peer disconnection notifications and connected-peer counts are emitted consistently across disconnect, handshake, and maintenance scenarios.
  • Refactor

    • Consolidated peer removal and notification flow so unhealthy peers are detected, removed, and reported uniformly during maintenance and error paths.

@coderabbitai
Contributor

coderabbitai bot commented Feb 14, 2026

Warning

Rate limit exceeded

@xdustinface has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 28 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough

Walkthrough

Refactors peer-removal flow to centralize removal, counter updates, and PeerDisconnected/PeersUpdated events via new helpers in the network manager; pool health-check now returns unhealthy peer addresses (removed by caller) instead of performing internal cleanup and logging.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Network Manager<br>`dash-spv/src/network/manager.rs` | Added `notify_peer_removed` and `remove_peer_and_notify` helpers; replaced ad-hoc removals with these helpers across peer reader shutdown, maintenance tick, connect/handshake failure paths, and `disconnect_peer` to ensure consistent counter updates and event emission. |
| Network Pool Health Checks<br>`dash-spv/src/network/pool.rs` | Replaced `cleanup_disconnected()` with `remove_unhealthy() -> Vec<SocketAddr>`: collects unhealthy addresses under a read lock, removes them under a write lock, and returns the addresses for the caller to handle events and logging. |
| Event/State Flow<br>`.../manager.rs`, `.../pool.rs` | Shifted responsibility for emitting `PeerDisconnected`/`PeersUpdated` and maintaining connected-peer counts to the manager helpers; the pool no longer emits per-peer logs during cleanup and instead returns addresses for external handling. |
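The pool-side change can be sketched as below. This is an illustrative stand-in, not the real `pool.rs`: the `PeerPool` fields and `PeerState` type are assumptions, and the real code holds read/write locks where this sketch uses plain ownership. Only the method name `remove_unhealthy` and its return type come from the PR.

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Illustrative stand-ins for the pool's real types (names are assumptions).
struct PeerState {
    healthy: bool,
}

struct PeerPool {
    peers: HashMap<SocketAddr, PeerState>,
}

impl PeerPool {
    /// Remove unhealthy peers and return only the addresses actually removed,
    /// leaving event emission and logging to the caller (the manager).
    fn remove_unhealthy(&mut self) -> Vec<SocketAddr> {
        // First pass: collect candidates (the real code does this under a read lock).
        let unhealthy: Vec<SocketAddr> = self
            .peers
            .iter()
            .filter(|(_, p)| !p.healthy)
            .map(|(addr, _)| *addr)
            .collect();
        // Second pass: remove them (under a write lock in the real code).
        // HashMap::remove returns Option, so a peer another path already
        // removed in between is skipped and never reported twice.
        unhealthy
            .into_iter()
            .filter(|addr| self.peers.remove(addr).is_some())
            .collect()
    }
}
```

Returning only the actually-removed addresses is what lets the manager call its notify helper once per peer without risking double-decrements.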

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Manager
    participant Pool
    participant PeerReader
    participant EventBus

    PeerReader->>Manager: notify shutdown (addr)
    Manager->>Manager: remove_peer_and_notify(addr)
    Manager->>Pool: remove peer entry (addr)
    Pool-->>Manager: removal result
    Manager->>EventBus: emit PeerDisconnected(addr)
    Manager->>EventBus: emit PeersUpdated(current_count)
```

```mermaid
sequenceDiagram
    participant Manager
    participant Pool
    participant EventBus

    Manager->>Pool: remove_unhealthy()
    Pool-->>Manager: Vec<SocketAddr> (unhealthy)
    loop for each addr
        Manager->>Manager: notify_peer_removed(addr)
        Manager->>EventBus: emit PeerDisconnected(addr)
    end
    Manager->>EventBus: emit PeersUpdated(current_count)
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 I nudge the peers, I tidy the nest,
Helpers hop in, doing their best,
Counts kept tidy, events sent true,
Connections pruned like morning dew,
A tiny hop — the network's new zest! 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately captures the main objective of the PR: fixing state management and event emission in peer removal paths by centralizing cleanup logic. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |


@github-actions github-actions bot added the merge-conflict The PR conflicts with the target branch. label Feb 16, 2026
@github-actions

This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.

Base automatically changed from refactor/split-loop to v0.42-dev February 17, 2026 00:48
@xdustinface xdustinface force-pushed the refactor/remove-peer-helper branch from 5b1bd05 to 3c52c6c on February 17, 2026 10:07
@github-actions github-actions bot removed the merge-conflict The PR conflicts with the target branch. label Feb 17, 2026
@xdustinface xdustinface marked this pull request as ready for review February 17, 2026 10:13
@xdustinface
Collaborator Author

@CodeRabbit review

@coderabbitai
Contributor

coderabbitai bot commented Feb 17, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
dash-spv/src/network/manager.rs (1)

1288-1292: Shutdown path bypasses event emission.

shutdown() still calls pool.remove_peer() directly without remove_peer_and_notify. If consumers rely on PeerDisconnected events to perform their own cleanup, they won't receive them during shutdown. This may be intentional (shutdown_token cancellation is the primary signal), but it's worth confirming that downstream subscribers don't need these events.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/network/manager.rs` around lines 1288 - 1292, The shutdown path
currently removes peers by calling self.pool.remove_peer(&addr).await which
bypasses event emission; change it to call the pool's remove_peer_and_notify (or
equivalent notify-aware API) for each address so PeerDisconnected events are
emitted to subscribers during shutdown (e.g., replace
self.pool.remove_peer(&addr).await with
self.pool.remove_peer_and_notify(&addr).await and handle any Result/Errors as
needed), and ensure this interaction coexists correctly with shutdown_token
cancellation semantics so downstream subscribers still observe PeerDisconnected
events.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 789-802: The maintenance path is notifying peers that may already
have been removed by the reader loop, causing double-decrements and duplicate
PeerDisconnected events; update remove_unhealthy (in pool.rs) so it only
includes addresses that were actually removed from the internal peers map by
checking the Option returned by HashMap::remove and collecting only those
addresses for return, so maintenance_tick’s call to notify_peer_removed will
only run for peers that were truly removed by remove_unhealthy (leave
maintenance_tick, notify_peer_removed, and remove_peer_and_notify logic
unchanged).
- Around line 339-358: notify_peer_removed currently calls
connected_peer_count.fetch_sub(1, Ordering::Relaxed) which can wrap to
usize::MAX if the counter is already zero; change this to a saturating decrement
(use AtomicUsize::fetch_update or a compare_exchange CAS loop) so the counter
never goes below 0, then load the stable count and proceed to send
PeerDisconnected and PeersUpdated; reference the notify_peer_removed function
and the connected_peer_count AtomicUsize (and ensure consistency with
is_connected()/peer_count() trait methods that consume this counter).
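The saturating-decrement fix requested above can be isolated into a small pattern. This is a sketch of the technique (a CAS-style decrement via `AtomicUsize::fetch_update`), not the crate's actual code; the function name is made up for illustration.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Decrement that saturates at zero instead of wrapping to usize::MAX,
/// returning the new value derived from the same atomic operation.
fn saturating_decrement(counter: &AtomicUsize) -> usize {
    let prev = counter
        .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |n| {
            Some(n.saturating_sub(1))
        })
        // The closure always returns Some, so fetch_update always succeeds;
        // unwrap_or(0) is just a defensive fallback.
        .unwrap_or(0);
    prev.saturating_sub(1)
}
```

A plain `fetch_sub(1, ...)` on a counter that is already zero would wrap to `usize::MAX`, which is exactly the bug the review comment flags; `fetch_update` lets the closure clamp at zero before the store happens.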

---

Nitpick comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 1288-1292: The shutdown path currently removes peers by calling
self.pool.remove_peer(&addr).await which bypasses event emission; change it to
call the pool's remove_peer_and_notify (or equivalent notify-aware API) for each
address so PeerDisconnected events are emitted to subscribers during shutdown
(e.g., replace self.pool.remove_peer(&addr).await with
self.pool.remove_peer_and_notify(&addr).await and handle any Result/Errors as
needed), and ensure this interaction coexists correctly with shutdown_token
cancellation semantics so downstream subscribers still observe PeerDisconnected
events.

@xdustinface xdustinface force-pushed the refactor/remove-peer-helper branch from 3c52c6c to cea4bcb on February 17, 2026 14:18
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
dash-spv/src/network/manager.rs (1)

1267-1296: Note: shutdown still uses raw pool.remove_peer without events.

This is likely intentional since the shutdown token is already cancelled and task join is complete at this point, but worth documenting if any consumer ever relies on PeerDisconnected for final cleanup. A brief comment at line 1292 would clarify intent.

Optional: add a clarifying comment

```diff
         // Disconnect all peers
+        // No PeerDisconnected events emitted here — shutdown_token already signals all consumers.
         for addr in self.pool.get_connected_addresses().await {
             self.pool.remove_peer(&addr).await;
         }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/network/manager.rs` around lines 1267 - 1296, The shutdown
method currently cancels shutdown_token, waits for tasks in tasks.join_next(),
and then calls pool.remove_peer for each address without emitting
PeerDisconnected events; add a brief clarifying comment in the shutdown function
(near the loop that iterates pool.get_connected_addresses and calls
pool.remove_peer) stating that this is intentional because shutdown_token has
been cancelled and tasks have been joined so any PeerDisconnected handlers
should not be relied upon for final cleanup, and mention that if consumers need
PeerDisconnected they should not rely on shutdown to trigger it or should be
adjusted accordingly (reference symbols: shutdown, shutdown_token.cancel,
tasks.join_next, pool.get_connected_addresses, pool.remove_peer,
PeerDisconnected).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 339-362: notify_peer_removed currently calls
connected_peer_count.fetch_update then separately connected_peer_count.load,
which can yield a stale count; instead capture the result from fetch_update (the
previous value) and derive the new count from that (e.g. saturating subtraction)
to provide an atomic view for the PeersUpdated event, updating the logic around
fetch_update/ sub_result and replacing the subsequent load() usage so the count
in NetworkEvent::PeersUpdated is computed from the fetch_update result within
notify_peer_removed.

---

Nitpick comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 1267-1296: The shutdown method currently cancels shutdown_token,
waits for tasks in tasks.join_next(), and then calls pool.remove_peer for each
address without emitting PeerDisconnected events; add a brief clarifying comment
in the shutdown function (near the loop that iterates
pool.get_connected_addresses and calls pool.remove_peer) stating that this is
intentional because shutdown_token has been cancelled and tasks have been joined
so any PeerDisconnected handlers should not be relied upon for final cleanup,
and mention that if consumers need PeerDisconnected they should not rely on
shutdown to trigger it or should be adjusted accordingly (reference symbols:
shutdown, shutdown_token.cancel, tasks.join_next, pool.get_connected_addresses,
pool.remove_peer, PeerDisconnected).

- Fixing inconsistent states from `disconnect_peer` and the maintenance loop's health check. They removed peers without decrementing the connection counter or emitting `PeerDisconnected` / `PeersUpdated` events.
- Extracted `remove_peer_and_notify` and `notify_peer_removed` helpers to ensure consistent cleanup across all removal paths.
@xdustinface xdustinface force-pushed the refactor/remove-peer-helper branch from cea4bcb to 8b91a53 on February 17, 2026 14:22
@xdustinface xdustinface merged commit 45af84c into v0.42-dev Feb 17, 2026
53 checks passed
@xdustinface xdustinface deleted the refactor/remove-peer-helper branch February 17, 2026 22:10