fix: correct state and PeerDisconnected events in removal paths #436

xdustinface merged 1 commit into v0.42-dev

Conversation
📝 Walkthrough

Refactors the peer-removal flow to centralize removal, counter updates, and PeerDisconnected/PeersUpdated events via new helpers in the network manager; the pool health check now returns unhealthy peer addresses (removed by the caller) instead of performing internal cleanup and logging.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Manager
    participant Pool
    participant PeerReader
    participant EventBus
    PeerReader->>Manager: notify shutdown (for address)
    Manager->>Manager: remove_peer_and_notify(addr)
    Manager->>Pool: remove peer entry (addr)
    Pool-->>Manager: removal result
    Manager->>EventBus: emit PeerDisconnected(addr)
    Manager->>EventBus: emit PeersUpdated(current_count)
```
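The centralized removal path in the diagram above can be sketched in plain Rust. This is a minimal sketch, not the PR's implementation: only the helper names `remove_peer_and_notify` and `notify_peer_removed` and the event names come from the PR; the `Manager` fields, the synchronous `HashMap` pool, and the `mpsc` channel standing in for the event bus are all assumptions.

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::Sender;

// Stand-in for the real event bus payloads.
#[derive(Debug, PartialEq)]
enum NetworkEvent {
    PeerDisconnected(SocketAddr),
    PeersUpdated(usize),
}

struct Manager {
    pool: HashMap<SocketAddr, ()>, // peer entries (payload elided)
    connected_peer_count: AtomicUsize,
    events: Sender<NetworkEvent>,
}

impl Manager {
    /// Central removal path: remove from the pool, then notify exactly once.
    fn remove_peer_and_notify(&mut self, addr: SocketAddr) {
        if self.pool.remove(&addr).is_some() {
            self.notify_peer_removed(addr);
        }
    }

    /// Counter update + event emission, shared by every removal path.
    fn notify_peer_removed(&self, addr: SocketAddr) {
        // Saturating decrement: a stray double-removal can never wrap the counter.
        let count = match self.connected_peer_count.fetch_update(
            Ordering::AcqRel,
            Ordering::Acquire,
            |c| c.checked_sub(1),
        ) {
            Ok(prev) => prev - 1, // derive the new count from the returned previous value
            Err(_) => 0,          // counter was already zero
        };
        let _ = self.events.send(NetworkEvent::PeerDisconnected(addr));
        let _ = self.events.send(NetworkEvent::PeersUpdated(count));
    }
}
```

Routing every removal through `notify_peer_removed` is what keeps the counter and the two events consistent regardless of which path (reader shutdown, explicit disconnect, health check) triggered the removal.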
```mermaid
sequenceDiagram
    participant Manager
    participant Pool
    participant EventBus
    Manager->>Pool: remove_unhealthy()
    Pool-->>Manager: Vec<SocketAddr> (unhealthy)
    loop for each addr
        Manager->>Manager: notify_peer_removed(addr)
        Manager->>EventBus: emit PeerDisconnected(addr)
    end
    Manager->>EventBus: emit PeersUpdated(current_count)
```
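The health-check half of this flow — the pool returning the addresses it removed so the manager can notify — can be sketched as follows. Only `remove_unhealthy` and its `Vec<SocketAddr>` return type come from the PR; `PeerPool`, `PeerEntry`, and the `healthy` flag are hypothetical stand-ins for the real liveness check.

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Hypothetical peer entry; `healthy` stands in for the real liveness check.
struct PeerEntry {
    healthy: bool,
}

struct PeerPool {
    peers: HashMap<SocketAddr, PeerEntry>,
}

impl PeerPool {
    /// Drops unhealthy peers and returns only the addresses that were
    /// actually removed. Checking the `Option` returned by `HashMap::remove`
    /// guards against reporting a peer some other path already took out, so
    /// the caller emits exactly one PeerDisconnected per removed peer.
    fn remove_unhealthy(&mut self) -> Vec<SocketAddr> {
        let candidates: Vec<SocketAddr> = self
            .peers
            .iter()
            .filter(|(_, entry)| !entry.healthy)
            .map(|(addr, _)| *addr)
            .collect();
        candidates
            .into_iter()
            .filter(|addr| self.peers.remove(addr).is_some())
            .collect()
    }
}
```

Returning the removed addresses instead of logging and notifying inside the pool keeps event emission in one place (the manager), which is the refactor's point.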
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
1d9fe8e to 0a40638 (Compare)
This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.
5b1bd05 to 3c52c6c (Compare)
@CodeRabbit review
✅ Actions performed: Review triggered.
Actionable comments posted: 2
🧹 Nitpick comments (1)
dash-spv/src/network/manager.rs (1)
1288-1292: Shutdown path bypasses event emission.
`shutdown()` still calls `pool.remove_peer()` directly without `remove_peer_and_notify`. If consumers rely on `PeerDisconnected` events to perform their own cleanup, they won't receive them during shutdown. This may be intentional (`shutdown_token` cancellation is the primary signal), but it's worth confirming that downstream subscribers don't need these events.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/manager.rs` around lines 1288 - 1292, The shutdown path currently removes peers by calling self.pool.remove_peer(&addr).await which bypasses event emission; change it to call the pool's remove_peer_and_notify (or equivalent notify-aware API) for each address so PeerDisconnected events are emitted to subscribers during shutdown (e.g., replace self.pool.remove_peer(&addr).await with self.pool.remove_peer_and_notify(&addr).await and handle any Result/Errors as needed), and ensure this interaction coexists correctly with shutdown_token cancellation semantics so downstream subscribers still observe PeerDisconnected events.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 789-802: The maintenance path is notifying peers that may already
have been removed by the reader loop, causing double-decrements and duplicate
PeerDisconnected events; update remove_unhealthy (in pool.rs) so it only
includes addresses that were actually removed from the internal peers map by
checking the Option returned by HashMap::remove and collecting only those
addresses for return, so maintenance_tick’s call to notify_peer_removed will
only run for peers that were truly removed by remove_unhealthy (leave
maintenance_tick, notify_peer_removed, and remove_peer_and_notify logic
unchanged).
- Around line 339-358: notify_peer_removed currently calls
connected_peer_count.fetch_sub(1, Ordering::Relaxed) which can wrap to
usize::MAX if the counter is already zero; change this to a saturating decrement
(use AtomicUsize::fetch_update or a compare_exchange CAS loop) so the counter
never goes below 0, then load the stable count and proceed to send
PeerDisconnected and PeersUpdated; reference the notify_peer_removed function
and the connected_peer_count AtomicUsize (and ensure consistency with
is_connected()/peer_count() trait methods that consume this counter).
---
Nitpick comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 1288-1292: The shutdown path currently removes peers by calling
self.pool.remove_peer(&addr).await which bypasses event emission; change it to
call the pool's remove_peer_and_notify (or equivalent notify-aware API) for each
address so PeerDisconnected events are emitted to subscribers during shutdown
(e.g., replace self.pool.remove_peer(&addr).await with
self.pool.remove_peer_and_notify(&addr).await and handle any Result/Errors as
needed), and ensure this interaction coexists correctly with shutdown_token
cancellation semantics so downstream subscribers still observe PeerDisconnected
events.
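The underflow issue flagged in the second inline comment above — `fetch_sub(1, Ordering::Relaxed)` on a counter already at zero wraps to `usize::MAX`, since `AtomicUsize` arithmetic is wrapping — can be demonstrated next to the suggested fix. The helper name `saturating_dec` is mine, not from the codebase; it uses the `fetch_update` approach the comment recommends.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Saturating decrement: refuses to go below zero, unlike `fetch_sub`,
/// which wraps a usize counter at zero around to usize::MAX.
/// Returns the post-decrement value.
fn saturating_dec(counter: &AtomicUsize) -> usize {
    match counter.fetch_update(Ordering::AcqRel, Ordering::Acquire, |c| c.checked_sub(1)) {
        Ok(prev) => prev - 1,    // decremented; derive the new value from the old one
        Err(current) => current, // already zero; leave it alone
    }
}
```

`fetch_update` retries its closure under contention and returns `Err` with the current value when the closure yields `None`, which is exactly the "refuse to underflow" behavior the review asks for.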
3c52c6c to
cea4bcb
Compare
🧹 Nitpick comments (1)
dash-spv/src/network/manager.rs (1)
1267-1296: Note: shutdown still uses raw `pool.remove_peer` without events. This is likely intentional since the shutdown token is already cancelled and task join is complete at this point, but worth documenting if any consumer ever relies on `PeerDisconnected` for final cleanup. A brief comment at line 1292 would clarify intent.

Optional: add a clarifying comment

```diff
  // Disconnect all peers
+ // No PeerDisconnected events emitted here — shutdown_token already signals all consumers.
  for addr in self.pool.get_connected_addresses().await {
      self.pool.remove_peer(&addr).await;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/manager.rs` around lines 1267 - 1296, The shutdown method currently cancels shutdown_token, waits for tasks in tasks.join_next(), and then calls pool.remove_peer for each address without emitting PeerDisconnected events; add a brief clarifying comment in the shutdown function (near the loop that iterates pool.get_connected_addresses and calls pool.remove_peer) stating that this is intentional because shutdown_token has been cancelled and tasks have been joined so any PeerDisconnected handlers should not be relied upon for final cleanup, and mention that if consumers need PeerDisconnected they should not rely on shutdown to trigger it or should be adjusted accordingly (reference symbols: shutdown, shutdown_token.cancel, tasks.join_next, pool.get_connected_addresses, pool.remove_peer, PeerDisconnected).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 339-362: notify_peer_removed currently calls
connected_peer_count.fetch_update then separately connected_peer_count.load,
which can yield a stale count; instead capture the result from fetch_update (the
previous value) and derive the new count from that (e.g. saturating subtraction)
to provide an atomic view for the PeersUpdated event, updating the logic around
fetch_update/ sub_result and replacing the subsequent load() usage so the count
in NetworkEvent::PeersUpdated is computed from the fetch_update result within
notify_peer_removed.
---
Nitpick comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 1267-1296: The shutdown method currently cancels shutdown_token,
waits for tasks in tasks.join_next(), and then calls pool.remove_peer for each
address without emitting PeerDisconnected events; add a brief clarifying comment
in the shutdown function (near the loop that iterates
pool.get_connected_addresses and calls pool.remove_peer) stating that this is
intentional because shutdown_token has been cancelled and tasks have been joined
so any PeerDisconnected handlers should not be relied upon for final cleanup,
and mention that if consumers need PeerDisconnected they should not rely on
shutdown to trigger it or should be adjusted accordingly (reference symbols:
shutdown, shutdown_token.cancel, tasks.join_next, pool.get_connected_addresses,
pool.remove_peer, PeerDisconnected).
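The duplicate comment above (lines 339-362) asks that the count sent in `PeersUpdated` be derived from `fetch_update`'s returned previous value rather than a separate `load()`, which can observe other threads' decrements and report a stale count. A sketch of that pattern, with a concurrency check showing each caller gets a distinct snapshot (the function name `decrement_and_count` is hypothetical):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Decrement and return the post-decrement count in one atomic step.
/// A separate `load()` after the decrement could race with other threads
/// and return a stale value; deriving the count from the previous value
/// returned by `fetch_update` gives every caller a unique, consistent count.
fn decrement_and_count(counter: &AtomicUsize) -> usize {
    match counter.fetch_update(Ordering::AcqRel, Ordering::Acquire, |c| c.checked_sub(1)) {
        Ok(prev) => prev - 1,
        Err(_) => 0, // already zero; saturate instead of wrapping
    }
}
```

Because each successful `fetch_update` observes a distinct previous value, concurrent callers emitting `PeersUpdated` can never publish the same (stale) count twice.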
- Fixed inconsistent state from `disconnect_peer` and the maintenance loop's health check: they removed peers without decrementing the connection counter or emitting `PeerDisconnected` / `PeersUpdated` events.
- Extracted `remove_peer_and_notify` and `notify_peer_removed` helpers to ensure consistent cleanup across all removal paths.
cea4bcb to 8b91a53 (Compare)
Based on:
Summary by CodeRabbit
Bug Fixes
Refactor