Feat/mempool bloom filters chain management #80
Conversation
> ⚠️ **Review skipped:** automatic reviews are disabled on base/target branches other than the default branch. Check the settings in the CodeRabbit UI.

**Walkthrough**

Adds QRInfo-driven masternode sync, batch/reorg-aware header sync, storage padding and state APIs, a chain lock validation module, expanded masternode engine APIs, FFI platform functions, and Swift exposure. Updates checkpoints, network handlers, client config, and tests. Introduces DKG window utilities in dashcore and workspace/test scaffolding.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant Client
    participant SyncManager
    participant Network
    participant MasternodeSync
    participant Storage
    participant Engine
    Client->>SyncManager: start_sync()
    SyncManager->>Network: request_qr_info(base_hashes, tip_hash, extra_share)
    Network-->>SyncManager: QRInfo message
    SyncManager->>MasternodeSync: handle_qrinfo(qr_info, storage, network)
    MasternodeSync->>Storage: resolve heights for hashes
    Storage-->>MasternodeSync: heights
    MasternodeSync->>Engine: feed_qr_info(qr_info, heights)
    Engine-->>MasternodeSync: updated MN lists/quorums
    MasternodeSync-->>SyncManager: progress updated
```

```mermaid
sequenceDiagram
    participant SyncMgr as HeaderSync(Reorg)
    participant Network
    participant Storage
    SyncMgr->>Storage: prepare_sync()/load_headers_from_storage()
    SyncMgr->>Network: request_headers(base_hash)
    Network-->>SyncMgr: Headers(batch)
    SyncMgr->>SyncMgr: validate/connect to tip (checkpoint-aware)
    SyncMgr->>Storage: store headers (bulk)
    SyncMgr-->>Network: next request or done
```
**Estimated code review effort:** 🎯 5 (Critical) | ⏱️ ~90 minutes
**@coderabbitai review**

✅ Actions performed: review triggered.
Actionable comments posted: 45
🔭 Outside diff range comments (3)
dash-spv/src/storage/memory.rs (1)
**196-265: Add missing fields to storage statistics calculation.**

The `stats()` method doesn't include the newly added fields (`terminal_blocks`, `mempool_transactions`, `mempool_state`) in the size calculations, which will result in inaccurate memory usage reporting. Add these calculations after line 234:

```diff
     .sum();
+
+// Calculate size of terminal blocks
+let terminal_blocks_size = self.terminal_blocks.len()
+    * (std::mem::size_of::<u32>() + std::mem::size_of::<StoredTerminalBlock>());
+
+// Calculate size of mempool transactions
+let mempool_tx_size = self.mempool_transactions.len()
+    * (std::mem::size_of::<Txid>() + std::mem::size_of::<UnconfirmedTransaction>());
+
+// Calculate size of mempool state
+let mempool_state_size = if self.mempool_state.is_some() {
+    std::mem::size_of::<MempoolState>()
+} else {
+    0
+};
 // Insert all component sizes
```

And update the `component_sizes` and `total_size` calculation:

```diff
 component_sizes.insert("utxo_address_index".to_string(), utxo_address_index_size as u64);
+component_sizes.insert("terminal_blocks".to_string(), terminal_blocks_size as u64);
+component_sizes.insert("mempool_transactions".to_string(), mempool_tx_size as u64);
+component_sizes.insert("mempool_state".to_string(), mempool_state_size as u64);

 // Calculate total size
 let total_size = header_size as u64
     + filter_header_size as u64
     + filter_size as u64
     + metadata_size as u64
     + masternode_state_size as u64
     + chain_state_size as u64
     + header_hash_index_size as u64
     + utxo_size as u64
-    + utxo_address_index_size as u64;
+    + utxo_address_index_size as u64
+    + terminal_blocks_size as u64
+    + mempool_tx_size as u64
+    + mempool_state_size as u64;
```

dash-spv/src/sync/filters.rs (1)
**1448-1468: Remove underscore prefix from `_storage` parameter as it is used.**

The `_storage` parameter is actually used within the function body at line 1468, so it should not have an underscore prefix. The underscore convention is only for genuinely unused parameters.

```diff
 async fn handle_request_timeout(
     &mut self,
     range: (u32, u32),
     _network: &mut dyn NetworkManager,
-    _storage: &dyn StorageManager,
+    storage: &dyn StorageManager,
 ) -> SyncResult<()> {
```

dash-spv/src/client/mod.rs (1)
**1349-1365: Integrate mempool balances into address balance calculation.**

The `pending` and `pending_instant` fields are hardcoded to 0, which doesn't reflect actual mempool transactions when mempool tracking is enabled. Integrate mempool balances:

```diff
 pub async fn get_address_balance(&self, address: &dashcore::Address) -> Result<AddressBalance> {
     // Use wallet to get balance directly
     let wallet = self.wallet.read().await;
     let balance = wallet.get_balance_for_address(address).await.map_err(|e| {
         SpvError::Storage(crate::error::StorageError::ReadFailed(format!(
             "Wallet error: {}",
             e
         )))
     })?;
+
+    // Get mempool balance if mempool tracking is enabled
+    let (pending, pending_instant) = if self.config.enable_mempool_tracking {
+        let mempool_balance = self.get_mempool_balance(address).await?;
+        (mempool_balance.pending, mempool_balance.pending_instant)
+    } else {
+        (dashcore::Amount::from_sat(0), dashcore::Amount::from_sat(0))
+    };

     Ok(AddressBalance {
         confirmed: balance.confirmed + balance.instantlocked,
         unconfirmed: balance.pending,
-        pending: dashcore::Amount::from_sat(0),
-        pending_instant: dashcore::Amount::from_sat(0),
+        pending,
+        pending_instant,
     })
 }
```
♻️ Duplicate comments (2)
dash-spv/src/bloom/manager.rs (2)
**106-106: Preserve bloom filter error details.**

Same as in `builder.rs`: consider preserving the original error information instead of flattening it into a string.

**284-298: Refactor duplicated utility functions.**

As mentioned in the review of `builder.rs`, these utility functions should be moved to a shared module.

Also applies to: 301-307
🧹 Nitpick comments (53)
dash-spv/src/lib.rs (1)
**50-51: Major API expansion with new core modules.**

The addition of the `bloom`, `chain`, and `mempool_filter` modules represents significant new functionality:

- Bloom filter management and statistics tracking
- Advanced chain state management (forks, reorgs, checkpoints)
- Configurable mempool transaction filtering

These align well with the PR's objectives to enhance SPV client capabilities. Consider updating the crate-level documentation to highlight these new major features, as they significantly expand the library's capabilities beyond the current feature list.

Also applies to: 54-54
dash-spv/src/client/status_display.rs (1)
**59-59: TODO item needs attention: incomplete implementation.**

The hardcoded `false` value for `filter_sync_available`, together with the TODO comment, suggests this feature is incomplete. This could mislead consumers about actual filter sync availability. Consider either:

- Implementing the proper network manager integration now
- Adding clear documentation that this field is not yet functional
- Creating a tracking issue for completing this implementation

Do you want me to help create an issue to track completing the `filter_sync_available` implementation, or would you prefer to implement the network manager integration in this PR?

dash-spv/data/mainnet/mod.rs (1)
**5-174: Consider adding error logging for failed deserializations.**

The function silently ignores deserialization errors, which could make debugging difficult if terminal block data becomes corrupted. For debugging purposes, you could add error logging in debug builds:

```diff
 if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
     manager.add_state(state);
+} else {
+    #[cfg(debug_assertions)]
+    eprintln!("Failed to deserialize terminal block data for height {}", height);
 }
```

dash-spv/examples/test_terminal_blocks.rs (1)
**13-13: Consider making test heights configurable.**

The hardcoded test heights work well for the example, but consider adding command-line argument support to test arbitrary heights for more flexible testing.

```diff
+use std::env;
+
 fn main() {
+    let args: Vec<String> = env::args().collect();
+    let test_heights = if args.len() > 1 {
+        args[1..].iter().filter_map(|s| s.parse().ok()).collect()
+    } else {
+        vec![387480, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 760000, 800000, 850000, 900000]
+    };
-    let test_heights = vec![387480, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 760000, 800000, 850000, 900000];
```

CLAUDE.md (1)
**185-185: Minor typographical improvement needed.**

Consider using an en dash for the version range to improve readability.

```diff
-- Support for Dash Core versions 0.18.0 - 0.21.0
+- Support for Dash Core versions 0.18.0–0.21.0
```

dash-spv/CLAUDE.md (1)
**104-104: Minor formatting improvement for compound adjective.**

Consider using a hyphen for the compound modifier to improve readability.

```diff
-- **Phase 4: Filters** - Download specific filters on demand
+- **Phase 4: Filters** - Download specific filters on-demand
```

dash-spv/src/client/block_processor.rs (1)
**275-280: Consider handling event send failures.**

The current implementation silently ignores event send failures, which could lead to missed notifications.

```diff
-let _ = self.event_tx.send(SpvEvent::BlockProcessed {
+if let Err(e) = self.event_tx.send(SpvEvent::BlockProcessed {
     height: block_height,
     hash: block_hash.to_string(),
     transactions_count: block.txdata.len(),
     relevant_transactions,
-});
+}) {
+    tracing::warn!("Failed to send block processed event: {}", e);
+}
```

dash-spv/scripts/fetch_terminal_blocks.py (3)
**39-39: Add required blank lines before function definitions.**

Per PEP 8, top-level function definitions should be preceded by two blank lines.

```diff
 }
+
+
 def run_dash_cli(network, *args, parse_json=True):
```

Also applies to: 61-61, 98-98

**153-153: Remove unnecessary f-string prefixes.**

These strings don't contain placeholders, so the `f` prefix is unnecessary (and the doubled braces can be simplified).

```diff
 f.write(f'    // Terminal block {height}\n')
-f.write(f'    {{\n')
+f.write('    {\n')
 f.write(f'        let data = include_str!("terminal_block_{height}.json");\n')
-f.write(f'        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {{\n')
+f.write('        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {\n')
-f.write(f'            manager.add_state(state);\n')
+f.write('            manager.add_state(state);\n')
-f.write(f'        }}\n')
+f.write('        }\n')
-f.write(f'    }}\n\n')
+f.write('    }\n\n')
```

Also applies to: 155-158

**48-59: Consider simplifying the conditional structure.**

The `else` after `return` is unnecessary and can be removed for cleaner code flow.

```diff
 try:
     result = subprocess.run(cmd, capture_output=True, text=True, check=True)
     if parse_json:
         return json.loads(result.stdout)
-    else:
-        return result.stdout.strip()
+    return result.stdout.strip()
 except subprocess.CalledProcessError as e:
```

dash-spv-ffi/tests/test_mempool_tracking.rs (1)
**160-180: Handle network connectivity gracefully in tests.**

The test correctly handles the case where network connectivity may not be available during testing by allowing `dash_spv_ffi_client_start` to fail. This makes the tests more robust in CI environments. Consider adding a comment explaining why the network failure is expected and acceptable in this test context.

```diff
 // Start client (would fail without network but tests structure)
 let result = dash_spv_ffi_client_start(client);
-// Allow failure since we're not connected to network
+// Allow failure since we're not connected to a real Dash network in tests
 if result == 0 {
```

dash-spv/src/sync/headers2_state.rs (1)
**119-137: Consider potential overflow in bandwidth savings calculation.**

The bandwidth savings calculation assumes 80 bytes per uncompressed header but doesn't handle potential overflow when multiplying large header counts.

```diff
 pub fn bandwidth_savings(&self) -> f64 {
     if self.total_bytes_received == 0 {
         0.0
     } else {
-        let uncompressed_size = self.total_headers_received * 80;
+        let uncompressed_size = self.total_headers_received.saturating_mul(80);
         let savings = (uncompressed_size - self.total_bytes_received) as f64;
         (savings / uncompressed_size as f64) * 100.0
     }
 }
```

dash-spv-ffi/tests/test_event_callbacks.rs (2)
**31-36: Review the unsafe pointer dereference pattern.**

The callback functions use unsafe pointer dereferencing without additional safety checks beyond the basic cast. While this is typical for FFI callbacks, consider documenting the safety assumptions or adding debug assertions.

```diff
 extern "C" fn test_block_callback(height: u32, _hash: *const c_char, user_data: *mut c_void) {
     println!("Test block callback called: height={}", height);
+    debug_assert!(!user_data.is_null(), "User data pointer should not be null");
     let data = unsafe { &*(user_data as *const TestEventData) };
     data.block_received.store(true, Ordering::SeqCst);
     data.block_height.store(height, Ordering::SeqCst);
 }
```
**107-107: Consider making the sleep duration configurable.**

The hardcoded 5-second sleep may be insufficient in CI environments or on slower systems, potentially causing flaky tests.

```diff
-thread::sleep(Duration::from_secs(5));
+let wait_duration = std::env::var("FFI_TEST_WAIT_SECS")
+    .ok()
+    .and_then(|s| s.parse().ok())
+    .unwrap_or(5);
+thread::sleep(Duration::from_secs(wait_duration));
```

dash-spv-ffi/src/config.rs (1)
**254-254: Verify the type conversion for `max_transactions`.**

The conversion from `u32` to `usize` is safe on most platforms, but consider documenting the potential truncation on 16-bit systems (though unlikely in practice).

```diff
-config.max_mempool_transactions = max_transactions as usize;
+config.max_mempool_transactions = max_transactions as usize; // Safe: u32 -> usize conversion
```

dash-spv/src/network/mod.rs (1)
**231-241: Consider edge case handling in height conversion.**

The height conversion from `i32` to `u32` could handle negative heights more explicitly, though negative heights shouldn't occur in practice.

```diff
 if let Some(connection) = &self.connection {
     // For single peer connection, return the peer's best height
     match connection.peer_info().best_height {
         Some(height) if height > 0 => Ok(Some(height as u32)),
+        Some(height) if height < 0 => {
+            tracing::warn!("Received negative best height: {}", height);
+            Ok(None)
+        }
         _ => Ok(None),
     }
```

dash-spv/src/client/config.rs (1)
**327-339: Consider additional validation and shorter error messages.**

The validation logic is good, but consider:

- The error message on line 330 could be shortened to match the style of the other messages
- Adding validation that `recent_send_window_secs` < `mempool_timeout_secs` when using the Selective strategy

```diff
 // Mempool validation
 if self.enable_mempool_tracking {
     if self.max_mempool_transactions == 0 {
-        return Err("max_mempool_transactions must be > 0 when mempool tracking is enabled".to_string());
+        return Err("max_mempool_transactions must be > 0".to_string());
     }
     if self.mempool_timeout_secs == 0 {
         return Err("mempool_timeout_secs must be > 0".to_string());
     }
     if self.mempool_strategy == MempoolStrategy::Selective && self.recent_send_window_secs == 0 {
         return Err("recent_send_window_secs must be > 0 for Selective strategy".to_string());
     }
+    if self.mempool_strategy == MempoolStrategy::Selective && self.recent_send_window_secs >= self.mempool_timeout_secs {
+        return Err("recent_send_window_secs must be < mempool_timeout_secs".to_string());
+    }
 }
```

dash-spv/src/chain/orphan_pool.rs (3)
**73-84: Avoid unnecessary header cloning.**

The header is cloned twice: once on line 74 and again within the orphan on line 83.

```diff
 // Create orphan entry
 let orphan = OrphanBlock {
-    header: header.clone(),
+    header: *header,
     received_at: Instant::now(),
     process_attempts: 0,
 };
```
**98-115: Consider renaming to indicate mutation.**

The method mutates `process_attempts`, which is unusual for a getter. Consider renaming it to `get_and_mark_orphans_by_prev` or adding a doc comment explaining this side effect.
**210-222: Remove unused variable assignment.**

Line 217 assigns to `_block_hash` but never uses it.

```diff
 // Remove these from the pool since we're processing them
 for header in &orphans {
-    let _block_hash = header.block_hash();
     self.remove_orphan(&header.block_hash());
 }
```

dash-spv/src/sync/sequential/phases.rs (1)
**193-360: Consider refactoring the large `progress()` method.**

The method contains repetitive calculations that could be extracted into helper functions:

```rust
impl SyncPhase {
    fn calculate_percentage(completed: u32, total: Option<u32>) -> f64 {
        match total {
            Some(t) if t > 0 => (completed as f64 / t as f64) * 100.0,
            Some(_) => 100.0,
            None => 0.0,
        }
    }

    fn calculate_eta(remaining: u32, rate: f64) -> Option<Duration> {
        if rate > 0.0 {
            Some(Duration::from_secs_f64(remaining as f64 / rate))
        } else {
            None
        }
    }

    fn calculate_rate(items: u32, elapsed: Duration) -> f64 {
        if elapsed.as_secs() > 0 {
            items as f64 / elapsed.as_secs_f64()
        } else {
            0.0
        }
    }
}
```

This would significantly reduce the code duplication in the `progress()` method.

dash-spv/src/mempool_filter.rs (1)
**151-154: Address TODO items for complete implementation.**

The fee calculation and InstantSend detection need to be implemented for full functionality. Would you like me to help implement the fee calculation and InstantSend detection logic?
dash-spv/src/chain/chain_tip.rs (2)
**160-175: Improve error message accuracy.**

The error message "all tips are active" is misleading, since only one tip can be active at a time.

```diff
 } else {
-    Err("Cannot evict: all tips are active")
+    Err("Cannot evict: only the active tip remains")
 }
```
**184-244: Consider adding test coverage for edge cases.**

The current tests cover basic functionality well, but consider adding tests for:

- `extend_tip` method behavior
- `remove_tip`, including removal of the active tip
- Error cases (extending a non-existent tip, eviction when only the active tip remains)
- Concurrent modification scenarios

Would you like me to generate additional test cases to improve coverage?
dash-spv/src/sync/sequential/progress.rs (1)
**44-60: Consider making phase weights configurable.**

The hard-coded phase weights may not reflect the actual sync time distribution across different network conditions or blockchain states. Consider:

- Making weights configurable through the constructor
- Adding documentation explaining the rationale for these specific weights
- Potentially adjusting weights dynamically based on historical sync data

```diff
-    pub fn new() -> Self {
+    pub fn new() -> Self {
+        // Weights based on typical sync time distribution:
+        // - Headers download typically takes the longest (40%)
+        // - Filter headers and filters are substantial (20% each)
+        // - Masternode lists and blocks are relatively quick (10% each)
         let mut phase_weights = std::collections::HashMap::new();
```

dash-spv/src/network/handshake.rs (1)
**256-257: Fix misleading comment about advertised services.**

The comment contradicts the code: it says "no other services" but `NODE_HEADERS_COMPRESSED` is being advertised.

```diff
-// SPV client advertises headers2 support but no other services
+// SPV client advertises headers2 compression support
 let services = NODE_HEADERS_COMPRESSED;
```

dash-spv/src/bloom/builder.rs (1)
**127-127: Preserve bloom filter error details.**

Consider preserving the original error information instead of converting it to a generic string.

```diff
-).map_err(|e| SpvError::General(format!("Failed to create bloom filter: {:?}", e)))?;
+).map_err(|e| SpvError::BloomFilter(e))?;
```

Note: this assumes `SpvError` has or can add a `BloomFilter` variant.

dash-spv/src/storage/sync_state.rs (1)
**337-386: Expand test coverage for sync state functionality.**

The current tests only cover basic validation. Consider adding tests for:

- Checkpoint creation and retrieval
- Recovery suggestion generation
- Edge cases in validation
- Serialization/deserialization

```rust
#[test]
fn test_checkpoint_creation() {
    // Test that checkpoints are created at correct intervals
}

#[test]
fn test_recovery_suggestions() {
    // Test different scenarios that generate recovery suggestions
}

#[test]
fn test_serialization() {
    // Test that sync state can be serialized and deserialized correctly
}
```

dash-spv/src/bloom/manager.rs (1)
**143-148: Remove redundant data collection.**

The `data` variable is created but not used correctly; the script bytes are accessed again on line 148.

```diff
-let mut data = Vec::new();
 self.add_address_to_filter(filter, address)?;

 // Get the script pubkey bytes
 let script = address.script_pubkey();
-data.extend_from_slice(script.as_bytes());
+let data = script.as_bytes().to_vec();
```

dash-spv/src/sync/mod.rs (1)
**30-39: Consider a clearer deprecation strategy.**

While the struct is marked deprecated, it still contains substantial implementation that might confuse users about whether to use it. Consider:

- Adding deprecation warnings to all public methods
- Documenting the migration path more clearly
- Delegating to `SequentialSyncManager` internally if possible
dash-spv/src/chain/chainlock_manager.rs (1)
**161-163: Add versioning to the serialized data format.**

Using bincode without versioning can cause compatibility issues if the `ChainLock` structure changes in the future. Consider:

- Adding a version prefix to the serialized data
- Using a versioned wrapper struct
- Or implementing a migration strategy for format changes

```diff
+#[derive(Serialize, Deserialize)]
+struct VersionedChainLock {
+    version: u8,
+    chain_lock: ChainLock,
+}

 let key = format!("chainlock:{}", chain_lock.block_height);
-let data = bincode::serialize(&chain_lock)
+let versioned = VersionedChainLock { version: 1, chain_lock };
+let data = bincode::serialize(&versioned)
     .map_err(|e| StorageError::Serialization(e.to_string()))?;
```

dash-spv/src/bloom/stats.rs (3)
**110-113: Avoid using 0 as a sentinel value for minimum query time.**

Using 0 to check whether `min_query_time_us` is uninitialized could fail if an actual query completes in less than 1 microsecond. Consider using `Option<u64>` for clarity:

```diff
-    pub min_query_time_us: u64,
+    pub min_query_time_us: Option<u64>,
```

And in the check:

```diff
-if self.stats.query_performance.min_query_time_us == 0
-    || micros < self.stats.query_performance.min_query_time_us {
-    self.stats.query_performance.min_query_time_us = micros;
+if self.stats.query_performance.min_query_time_us.map_or(true, |min| micros < min) {
+    self.stats.query_performance.min_query_time_us = Some(micros);
 }
```
**157-160: Clarify or improve the bandwidth estimation logic.**

The bandwidth estimation multiplies transaction size by 10 without clear justification; this arbitrary calculation makes the metric unreliable. Consider:

- Using actual network statistics if available
- Documenting the rationale for the multiplier
- Making the multiplier configurable
- Or clearly labeling this as an order-of-magnitude estimate in the output
**174-181: Consider an alternative design to avoid a mutable reference in a getter.**

The `get_stats` method requires `&mut self` just to update the elapsed time, which makes the API less ergonomic. Consider:

- Computing elapsed time on demand when generating reports
- Using `Cell` or `RefCell` for the timestamp if interior mutability is needed
- Accepting that the elapsed time might be slightly stale
- Separating the "update" and "get" operations
dash-spv/src/sync/sequential/mod.rs (4)
**71-73: Consider making `ReorgConfig` configurable.**

The `ReorgConfig` is created with defaults, but different networks or use cases might require different reorg parameters. Consider accepting it as a parameter or deriving it from `ClientConfig`.

```diff
 pub fn new(
     config: &ClientConfig,
     received_filter_heights: std::sync::Arc<std::sync::Mutex<std::collections::HashSet<u32>>>,
+    reorg_config: Option<ReorgConfig>,
 ) -> Self {
     // Create reorg config with sensible defaults
-    let reorg_config = ReorgConfig::default();
+    let reorg_config = reorg_config.unwrap_or_default();
```

**244-244: Make `DEFAULT_FILTER_RANGE` configurable and document its rationale.**

The hardcoded value of 10,000 blocks for filter download might not be suitable for all use cases. Consider making it configurable through `ClientConfig`:

```diff
-const DEFAULT_FILTER_RANGE: u32 = 10000; // Download last 10k blocks
+let default_filter_range = self.config.default_filter_range.unwrap_or(10000);
```

If it stays a constant, document the choice (10,000 blocks is roughly 17 days at Dash's 2.5-minute block time):

```diff
-const DEFAULT_FILTER_RANGE: u32 = 10000; // Download last 10k blocks
+// Download last 10k blocks (~17 days at 2.5 min/block); this covers most
+// wallet recovery scenarios while limiting bandwidth
+const DEFAULT_FILTER_RANGE: u32 = 10000;
```

**404-480: Consider refactoring complex filter timeout handling.**

The filter download timeout handling has deeply nested conditions and complex control flow. Consider extracting it into a dedicated method such as `handle_filter_download_timeout` to improve readability and maintainability:

```rust
async fn handle_filter_download_timeout(
    &mut self,
    network: &mut dyn NetworkManager,
    storage: &mut dyn StorageManager,
) -> SyncResult<()> {
    // Extract the timeout handling logic here
}
```

dash-spv-ffi/src/callbacks.rs (1)
**124-129: Consider making callback logging configurable.**

The extensive logging in callbacks is excellent for debugging but might be too verbose for production use. Consider using the `debug!` level or making it configurable.

```diff
-tracing::info!("🎯 Calling block callback: height={}, hash={}", height, hash);
+tracing::debug!("🎯 Calling block callback: height={}, hash={}", height, hash);
```

Also applies to: 135-143, 149-154
dash-spv/src/sync/headers.rs (1)
**211-211: Consider making the header sync timeout configurable.**

The 500 ms timeout might be too aggressive for peers on slow networks or during network congestion. This could lead to unnecessary re-requests and increased network traffic.

```diff
-std::time::Duration::from_millis(500)
+std::time::Duration::from_millis(self.config.header_sync_timeout_ms.unwrap_or(2000))
```

dash-spv/src/chain/checkpoints.rs (1)
**152-165: Consider optimizing the timestamp search using sorted heights.**

The current implementation iterates through all checkpoints. Since `sorted_heights` is already sorted, you could iterate in reverse order for better performance and clarity:

```diff
-pub fn last_checkpoint_before_timestamp(&self, timestamp: u32) -> Option<&Checkpoint> {
-    let mut best_checkpoint = None;
-    let mut best_height = 0;
-
-    for checkpoint in self.checkpoints.values() {
-        if checkpoint.timestamp <= timestamp && checkpoint.height >= best_height {
-            best_height = checkpoint.height;
-            best_checkpoint = Some(checkpoint);
-        }
-    }
-
-    best_checkpoint
-}
+pub fn last_checkpoint_before_timestamp(&self, timestamp: u32) -> Option<&Checkpoint> {
+    // Iterate through sorted heights in reverse order
+    for &height in self.sorted_heights.iter().rev() {
+        if let Some(checkpoint) = self.checkpoints.get(&height) {
+            if checkpoint.timestamp <= timestamp {
+                return Some(checkpoint);
+            }
+        }
+    }
+    None
+}
```

dash-spv/src/client/message_handler.rs (1)
**247-267: Consider extracting transaction fetch logic.**

The nested conditions make this code harder to read. Consider extracting the logic into a helper method:

```diff
-// Check if we should fetch this transaction
-if let Some(filter) = self.mempool_filter {
-    if self.config.fetch_mempool_transactions &&
-        filter.should_fetch_transaction(&txid).await {
-        tracing::info!("📥 Requesting transaction {}", txid);
-        // Request the transaction
-        let getdata = NetworkMessage::GetData(vec![item]);
-        if let Err(e) = self.network.send_message(getdata).await {
-            tracing::error!("Failed to request transaction {}: {}", txid, e);
-        }
-    } else {
-        tracing::debug!("Not fetching transaction {} (fetch_mempool_transactions={}, should_fetch={})",
-            txid,
-            self.config.fetch_mempool_transactions,
-            filter.should_fetch_transaction(&txid).await
-        );
-    }
-} else {
-    tracing::warn!("⚠️ Transaction {} announced but mempool tracking is disabled (enable_mempool_tracking=false)", txid);
-}
+// Check if we should fetch this transaction
+if let Err(e) = self.handle_transaction_inventory(item, &txid).await {
+    tracing::error!("Failed to handle transaction inventory: {}", e);
+}
```

Add a helper method:

```rust
async fn handle_transaction_inventory(&self, item: Inventory, txid: &dashcore::Txid) -> Result<()> {
    if let Some(filter) = self.mempool_filter {
        if self.config.fetch_mempool_transactions && filter.should_fetch_transaction(txid).await {
            tracing::info!("📥 Requesting transaction {}", txid);
            let getdata = NetworkMessage::GetData(vec![item]);
            self.network.send_message(getdata).await.map_err(SpvError::Network)?;
        } else {
            tracing::debug!(
                "Not fetching transaction {} (fetch_mempool_transactions={}, should_fetch={})",
                txid,
                self.config.fetch_mempool_transactions,
                filter.should_fetch_transaction(txid).await
            );
        }
    } else {
        tracing::warn!("⚠️ Transaction {} announced but mempool tracking is disabled", txid);
    }
    Ok(())
}
```

dash-spv/src/network/reputation.rs (1)
**209-272: Consider returning a richer reputation update result.**

The method returns `bool` for ban status, but callers might benefit from more detailed information about the reputation change:

```rust
pub enum ReputationUpdateResult {
    Updated { new_score: i32, old_score: i32 },
    Banned { score: i32, ban_count: u32 },
}

pub async fn update_reputation(
    &self,
    peer: SocketAddr,
    score_change: i32,
    reason: &str,
) -> ReputationUpdateResult
```

dash-spv/src/network/multi_peer.rs (1)
**363-367: Consider rate limiting GetHeaders2 requests.**

Ignoring GetHeaders2 is correct for SPV clients, but malicious peers could spam these requests. Consider adding rate limiting or a reputation penalty for excessive requests.

```diff
 NetworkMessage::GetHeaders2(_) => {
     // SPV clients don't serve compressed headers to peers
     log::debug!("Received GetHeaders2 from {} - ignoring (SPV client)", addr);
+    // TODO: Consider penalizing peers that repeatedly request headers from SPV clients
     continue; // Don't forward to client
 }
```

dash-spv/src/network/connection.rs (1)
**19-30: Consider a read-write lock for better concurrency.**

The unified mutex serializes all read and write operations, which could reduce throughput. Consider using `RwLock` to allow concurrent reads:

```diff
 pub struct TcpConnection {
     address: SocketAddr,
-    // Use a single mutex to protect both the write stream and read buffer
-    // This ensures no concurrent access to the underlying socket
-    state: Option<Arc<Mutex<ConnectionState>>>,
+    // Use RwLock to allow concurrent reads while maintaining exclusive writes
+    state: Option<Arc<RwLock<ConnectionState>>>,
```

Note, however, that this would require careful handling of the underlying `TcpStream` to ensure thread safety.

Also applies to: 250-252
dash-spv/src/sync/filters.rs (1)
**74-74: Consider using a getter method instead of making the field public.**

Making `syncing_filters` public breaks encapsulation and allows external code to modify the sync state directly, which could lead to inconsistent state. Consider keeping the field private and adding a public getter method instead.
**1761-1792: Consider persistence for mempool data.**

The mempool storage methods only store data in memory without persisting it to disk, so mempool state is lost on restart. This might be intentional, but it should be documented. If persistence is desired, consider adding disk storage:

```rust
async fn persist_mempool_state(&self) -> StorageResult<()> {
    let mempool_dir = self.base_path.join("mempool");
    tokio::fs::create_dir_all(&mempool_dir).await?;

    // Save transactions
    let txs = self.mempool_transactions.read().await;
    let tx_data = bincode::serialize(&*txs)?;
    let tx_path = mempool_dir.join("transactions.bin");
    tokio::fs::write(&tx_path, tx_data).await?;

    // Save state
    if let Some(state) = &*self.mempool_state.read().await {
        let state_data = bincode::serialize(state)?;
        let state_path = mempool_dir.join("state.bin");
        tokio::fs::write(&state_path, state_data).await?;
    }

    Ok(())
}
```

dash-spv/src/client/mod.rs (4)
514-524: Consider updating the filter in-place instead of recreating it.

Creating a new `MempoolFilter` instance every time watch items change could be inefficient, especially if this happens frequently. Consider adding an `update_watch_items` method to `MempoolFilter` to update items in-place:

```diff
-    async fn update_mempool_filter(&mut self) {
-        let watch_items = self.watch_items.read().await.iter().cloned().collect();
-        self.mempool_filter = Some(Arc::new(MempoolFilter::new(
-            self.config.mempool_strategy,
-            Duration::from_secs(self.config.recent_send_window_secs),
-            self.config.max_mempool_transactions,
-            self.mempool_state.clone(),
-            watch_items,
-        )));
-        tracing::info!("Updated mempool filter with current watch items");
-    }
+    async fn update_mempool_filter(&mut self) {
+        if let Some(ref mempool_filter) = self.mempool_filter {
+            let watch_items = self.watch_items.read().await.iter().cloned().collect();
+            // Assuming MempoolFilter has or could have an update_watch_items method
+            mempool_filter.update_watch_items(watch_items).await;
+            tracing::info!("Updated mempool filter with current watch items");
+        }
+    }
```
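A std-only sketch of what an in-place `update_watch_items` could look like (`MempoolFilter` here is a simplified stand-in, not the real struct): the watched set sits behind interior mutability, so callers holding an `Arc<MempoolFilter>` see the update without the filter itself being reallocated.

```rust
use std::collections::HashSet;
use std::sync::RwLock;

// Simplified stand-in for MempoolFilter: only the watch set is modeled.
pub struct MempoolFilter {
    watch_items: RwLock<HashSet<String>>,
}

impl MempoolFilter {
    pub fn new(items: HashSet<String>) -> Self {
        Self {
            watch_items: RwLock::new(items),
        }
    }

    // Replace the watched set in place; no new filter allocation needed.
    pub fn update_watch_items(&self, items: HashSet<String>) {
        *self.watch_items.write().unwrap() = items;
    }

    pub fn is_watched(&self, item: &str) -> bool {
        self.watch_items.read().unwrap().contains(item)
    }
}
```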
1133-1146: Consider adding backpressure handling for block processing.

Using an unbounded channel for block processing could lead to memory issues if blocks arrive faster than they can be processed.
Consider monitoring queue depth or switching to a bounded channel:
```rust
// Add queue depth monitoring
let queue_depth = self.block_processor_tx.len();
if queue_depth > 1000 {
    tracing::warn!("Block processor queue depth high: {}", queue_depth);
}

// Or switch to bounded channel with backpressure handling
```
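The bounded-channel alternative can be sketched with std's `sync_channel` (the `Block` type and capacity are illustrative, not the client's real ones): once the queue is full, `try_send` returns `Full`, giving the caller an explicit point to drop, retry, or slow the peer down instead of growing memory without limit.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

// Hypothetical block message type.
struct Block(u32);

// Returns (processed, dropped) counts for 5 sends into a capacity-2 queue.
fn try_enqueue_demo() -> (usize, usize) {
    // Bounded channel: holds at most 2 queued blocks.
    let (tx, rx) = sync_channel::<Block>(2);
    let mut dropped = 0;
    for h in 0..5 {
        match tx.try_send(Block(h)) {
            Ok(()) => {}
            // Backpressure signal: queue is full, caller decides what to do.
            Err(TrySendError::Full(_)) => dropped += 1,
            Err(TrySendError::Disconnected(_)) => break,
        }
    }
    drop(tx);
    let processed = rx.into_iter().count();
    (processed, dropped)
}
```

With capacity 2 and no consumer draining in parallel, the first two sends succeed and the remaining three are rejected, which is exactly the signal an unbounded channel never provides.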
2141-2142: Improve log message formatting for optional height.

The log message could print "None" when no height is stored, which is not user-friendly.

```diff
-        let stored_height = self.storage.get_tip_height().await.map_err(|e| SpvError::Storage(e))?;
-        tracing::info!("✅ Genesis block initialized at height 0, storage reports tip height: {:?}", stored_height);
+        let stored_height = self.storage.get_tip_height().await.map_err(|e| SpvError::Storage(e))?;
+        match stored_height {
+            Some(height) => tracing::info!("✅ Genesis block initialized at height 0, storage reports tip height: {}", height),
+            None => tracing::info!("✅ Genesis block initialized at height 0, no existing headers in storage"),
+        }
```
990-1026: Well-structured delegation to MessageHandler and SequentialSyncManager.The message handling delegation pattern is clean and maintainable. Consider documenting the interaction flow between Client, MessageHandler, and SequentialSyncManager for future maintainers.
dash-spv/src/storage/mod.rs (1)
29-48: Consider making ChainStorage methods consistent regarding mutability.

The trait has `store_header` taking `&self` while all other `StorageManager` store methods take `&mut self`. This inconsistency could be confusing. Consider either:

- Making `store_header` take `&mut self` for consistency, or
- Documenting why `ChainStorage` allows concurrent writes through `&self`

If interior mutability is intended for performance, document this design choice.
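The `&self` variant usually implies interior mutability; a minimal sketch of that pattern (the trait and types here are simplified stand-ins, not the real `ChainStorage`): each implementation supplies its own lock, so the storage can be shared behind `Arc` without an outer `&mut` borrow.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Simplified stand-in for the trait under discussion.
trait ChainStorage {
    // &self works because implementations lock internally.
    fn store_header(&self, height: u32, header: Vec<u8>);
    fn get_header(&self, height: u32) -> Option<Vec<u8>>;
}

#[derive(Default)]
struct MemoryChainStorage {
    // Interior mutability: writes go through the Mutex, not &mut self.
    headers: Mutex<HashMap<u32, Vec<u8>>>,
}

impl ChainStorage for MemoryChainStorage {
    fn store_header(&self, height: u32, header: Vec<u8>) {
        self.headers.lock().unwrap().insert(height, header);
    }

    fn get_header(&self, height: u32) -> Option<Vec<u8>> {
        self.headers.lock().unwrap().get(&height).cloned()
    }
}
```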
dash-spv/src/chain/reorg.rs (1)
340-398: Expand test coverage for reorganization handling.The current tests only cover validation logic. Consider adding tests for:
- Actual reorganization execution
- Chain lock enforcement
- Common ancestor finding
- Transaction rollback behavior
Would you like me to generate additional test cases covering these scenarios?
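As a sketch of the kind of common-ancestor check such a test would exercise (`find_common_ancestor` is a hypothetical helper over plain hash values, not the real reorg code): walk the candidate chain tip-first and return the first entry shared with the current chain.

```rust
use std::collections::HashSet;

// Hypothetical helper: chains are tip-first slices of block hashes
// (u64 stands in for a real block hash type).
fn find_common_ancestor(chain_a: &[u64], chain_b: &[u64]) -> Option<u64> {
    let b_set: HashSet<u64> = chain_b.iter().copied().collect();
    // First hash in chain_a (closest to its tip) that chain_b also contains.
    chain_a.iter().copied().find(|h| b_set.contains(h))
}
```

A reorg test would then assert on the fork point and on behavior when the chains share nothing (which should be rejected rather than reorganized).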
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (83)
- .gitignore (1 hunks)
- CLAUDE.md (1 hunks)
- dash-network-ffi/src/dash_network_ffi.swift (1 hunks)
- dash-network-ffi/src/dash_network_ffiFFI.h (1 hunks)
- dash-network-ffi/src/dash_network_ffiFFI.modulemap (1 hunks)
- dash-spv-ffi/Cargo.toml (1 hunks)
- dash-spv-ffi/cbindgen.toml (1 hunks)
- dash-spv-ffi/include/dash_spv_ffi.h (8 hunks)
- dash-spv-ffi/src/callbacks.rs (2 hunks)
- dash-spv-ffi/src/client.rs (11 hunks)
- dash-spv-ffi/src/config.rs (2 hunks)
- dash-spv-ffi/src/error.rs (1 hunks)
- dash-spv-ffi/src/lib.rs (1 hunks)
- dash-spv-ffi/src/types.rs (4 hunks)
- dash-spv-ffi/src/wallet.rs (5 hunks)
- dash-spv-ffi/tests/test_client.rs (1 hunks)
- dash-spv-ffi/tests/test_event_callbacks.rs (1 hunks)
- dash-spv-ffi/tests/test_mempool_tracking.rs (1 hunks)
- dash-spv-ffi/tests/test_types.rs (1 hunks)
- dash-spv-ffi/tests/unit/test_async_operations.rs (5 hunks)
- dash-spv-ffi/tests/unit/test_client_lifecycle.rs (2 hunks)
- dash-spv-ffi/tests/unit/test_type_conversions.rs (1 hunks)
- dash-spv/CLAUDE.md (4 hunks)
- dash-spv/Cargo.toml (2 hunks)
- dash-spv/README.md (2 hunks)
- dash-spv/data/mainnet/mod.rs (1 hunks)
- dash-spv/data/testnet/mod.rs (1 hunks)
- dash-spv/docs/TERMINAL_BLOCKS.md (1 hunks)
- dash-spv/docs/utxo_rollback.md (1 hunks)
- dash-spv/examples/test_genesis.rs (1 hunks)
- dash-spv/examples/test_terminal_blocks.rs (1 hunks)
- dash-spv/scripts/fetch_terminal_blocks.py (1 hunks)
- dash-spv/src/bloom/builder.rs (1 hunks)
- dash-spv/src/bloom/manager.rs (1 hunks)
- dash-spv/src/bloom/mod.rs (1 hunks)
- dash-spv/src/bloom/stats.rs (1 hunks)
- dash-spv/src/chain/chain_tip.rs (1 hunks)
- dash-spv/src/chain/chain_work.rs (1 hunks)
- dash-spv/src/chain/chainlock_manager.rs (1 hunks)
- dash-spv/src/chain/chainlock_test.rs (1 hunks)
- dash-spv/src/chain/checkpoints.rs (1 hunks)
- dash-spv/src/chain/fork_detector.rs (1 hunks)
- dash-spv/src/chain/mod.rs (1 hunks)
- dash-spv/src/chain/orphan_pool.rs (1 hunks)
- dash-spv/src/chain/reorg.rs (1 hunks)
- dash-spv/src/client/block_processor.rs (8 hunks)
- dash-spv/src/client/config.rs (5 hunks)
- dash-spv/src/client/consistency.rs (2 hunks)
- dash-spv/src/client/filter_sync.rs (3 hunks)
- dash-spv/src/client/message_handler.rs (14 hunks)
- dash-spv/src/client/mod.rs (35 hunks)
- dash-spv/src/client/status_display.rs (1 hunks)
- dash-spv/src/client/watch_manager.rs (0 hunks)
- dash-spv/src/error.rs (2 hunks)
- dash-spv/src/lib.rs (1 hunks)
- dash-spv/src/main.rs (2 hunks)
- dash-spv/src/mempool_filter.rs (1 hunks)
- dash-spv/src/network/connection.rs (15 hunks)
- dash-spv/src/network/constants.rs (1 hunks)
- dash-spv/src/network/handshake.rs (11 hunks)
- dash-spv/src/network/message_handler.rs (5 hunks)
- dash-spv/src/network/mock.rs (1 hunks)
- dash-spv/src/network/mod.rs (4 hunks)
- dash-spv/src/network/multi_peer.rs (26 hunks)
- dash-spv/src/network/reputation.rs (1 hunks)
- dash-spv/src/network/reputation_tests.rs (1 hunks)
- dash-spv/src/storage/disk.rs (10 hunks)
- dash-spv/src/storage/memory.rs (6 hunks)
- dash-spv/src/storage/mod.rs (2 hunks)
- dash-spv/src/storage/sync_state.rs (1 hunks)
- dash-spv/src/storage/sync_storage.rs (1 hunks)
- dash-spv/src/storage/types.rs (1 hunks)
- dash-spv/src/sync/filters.rs (15 hunks)
- dash-spv/src/sync/headers.rs (11 hunks)
- dash-spv/src/sync/headers2_state.rs (1 hunks)
- dash-spv/src/sync/headers_with_reorg.rs (1 hunks)
- dash-spv/src/sync/masternodes.rs (7 hunks)
- dash-spv/src/sync/mod.rs (10 hunks)
- dash-spv/src/sync/sequential/mod.rs (1 hunks)
- dash-spv/src/sync/sequential/phases.rs (1 hunks)
- dash-spv/src/sync/sequential/progress.rs (1 hunks)
- dash-spv/src/sync/sequential/recovery.rs (1 hunks)
- dash-spv/src/sync/sequential/request_control.rs (1 hunks)
💤 Files with no reviewable changes (1)
- dash-spv/src/client/watch_manager.rs
🧰 Additional context used
🧬 Code Graph Analysis (17)
dash-spv/src/main.rs (1)
dash-spv/src/client/mod.rs (1)
stop(539-572)
dash-spv/scripts/fetch_terminal_blocks.py (1)
dash-spv/examples/test_terminal_blocks.rs (1)
main(6-41)
dash-spv-ffi/src/wallet.rs (1)
dash-spv/src/types.rs (1)
address(408-413)
dash-spv/src/chain/chain_work.rs (2)
dash/src/blockdata/block.rs (2)
target(67-69)work(96-98)dash/src/blockdata/constants.rs (1)
genesis_block(149-222)
dash-spv/src/sync/sequential/progress.rs (2)
dash-spv/src/sync/sequential/mod.rs (2)
new(67-88)start_sync(116-149)dash-spv/src/sync/sequential/phases.rs (1)
progress(194-359)
dash-spv/data/mainnet/mod.rs (1)
dash-spv/src/sync/terminal_block_data/mainnet.rs (1)
load_mainnet_terminal_blocks(8-176)
dash-spv-ffi/src/config.rs (2)
dash-spv-ffi/src/error.rs (1)
set_last_error(25-30)dash-spv-ffi/include/dash_spv_ffi.h (8)
dash_spv_ffi_config_set_mempool_tracking(322-322)dash_spv_ffi_config_set_mempool_strategy(324-325)dash_spv_ffi_config_set_max_mempool_transactions(327-328)dash_spv_ffi_config_set_mempool_timeout(330-331)dash_spv_ffi_config_set_fetch_mempool_transactions(333-334)dash_spv_ffi_config_set_persist_mempool(336-336)dash_spv_ffi_config_get_mempool_tracking(338-338)dash_spv_ffi_config_get_mempool_strategy(340-340)
dash-spv/src/network/mod.rs (3)
dash-spv/src/network/multi_peer.rs (6)
get_peer_best_height(1061-1095)has_peer_with_service(1097-1111)get_peers_with_service(1113-1128)has_headers2_peer(1130-1142)new(65-97)peer_info(978-991)dash-spv/src/network/connection.rs (2)
new(50-69)peer_info(431-441)dash-spv/src/network/handshake.rs (1)
new(47-59)
dash-spv/data/testnet/mod.rs (1)
dash-spv/src/sync/terminal_block_data/testnet.rs (1)
load_testnet_terminal_blocks(8-112)
dash-spv/src/mempool_filter.rs (3)
dash-spv/src/types.rs (10)
new(182-184)new(937-957)txid(960-962)address(408-413)record_send(1044-1046)prune_expired(1019-1041)default(63-79)default(167-177)default(327-329)default(643-668)dash-spv/src/wallet/mod.rs (4)
new(77-85)new(110-119)is_transaction_relevant(165-187)default(103-105)dash-spv/src/client/mod.rs (1)
new(214-303)
dash-spv/src/chain/chain_tip.rs (1)
dash-spv/src/chain/chain_work.rs (2)
from_bytes(107-109)as_bytes(102-104)
dash-spv/src/bloom/builder.rs (1)
dash-spv/src/bloom/manager.rs (7)
new(76-85)add_address(137-159)add_outpoint(162-180)add_data(183-200)extract_pubkey_hash(284-298)outpoint_to_bytes(302-307)default(31-40)
dash-spv/src/sync/sequential/request_control.rs (2)
dash-spv/src/sync/sequential/mod.rs (1)
new(67-88)dash-spv/src/sync/sequential/phases.rs (1)
name(117-127)
dash-spv/src/sync/headers_with_reorg.rs (12)
dash-spv/src/chain/checkpoints.rs (4)
mainnet_checkpoints(248-371)testnet_checkpoints(374-456)default(70-76)new(91-107)dash-spv/src/chain/chain_work.rs (2)
default(137-139)from_height_and_header(58-78)dash-spv/src/chain/orphan_pool.rs (4)
default(239-241)new(39-41)len(144-146)stats(178-200)dash-spv/src/chain/chain_tip.rs (2)
new(27-35)new(50-56)dash-spv/src/chain/chainlock_manager.rs (1)
new(43-51)dash-spv/src/chain/fork_detector.rs (1)
new(21-26)dash-spv/src/chain/reorg.rs (1)
new(42-48)dash-spv/src/sync/sequential/request_control.rs (1)
new(63-86)dash-spv/src/sync/sequential/mod.rs (2)
new(67-88)reset_pending_requests(1373-1386)dash-spv/src/types.rs (2)
new_for_network(187-220)tip_height(223-225)dash-spv/src/sync/headers2_state.rs (2)
bandwidth_savings(129-137)compression_ratio(120-126)dash-spv/src/sync/sequential/phases.rs (1)
is_syncing(130-132)
dash-spv/src/sync/sequential/mod.rs (5)
dash-spv/src/sync/sequential/phases.rs (2)
progress(194-359)name(117-127)dash-spv/src/sync/sequential/recovery.rs (1)
new(76-89)dash-spv/src/sync/sequential/request_control.rs (1)
new(63-86)dash-spv/src/sync/sequential/transitions.rs (1)
new(17-21)dash-spv/src/sync/headers_with_reorg.rs (5)
new(66-97)new(688-690)load_headers_from_storage(100-166)handle_headers2_message(385-413)handle_headers_message(169-221)
dash-spv-ffi/src/callbacks.rs (2)
dash-spv-ffi/include/dash_spv_ffi.h (6)
height(106-106)txid(108-113)txid(117-121)txid(123-126)txid(128-128)confirmed(115-115)dash-spv-ffi/src/types.rs (2)
new(14-19)new(258-270)
dash-spv/src/storage/disk.rs (7)
dash-spv/src/client/mod.rs (2)
storage(561-561)new(214-303)dash-spv/src/client/message_handler.rs (1)
new(33-61)dash-spv/src/mempool_filter.rs (1)
new(30-44)dash-spv/src/types.rs (5)
new(182-184)new(937-957)serialize(437-467)serialize(711-723)txid(960-962)dash-spv/src/storage/memory.rs (22)
new(39-54)store_sync_state(337-343)load_sync_state(345-354)clear_sync_state(356-361)store_sync_checkpoint(363-368)get_sync_checkpoints(370-388)store_chain_lock(390-395)load_chain_lock(397-406)get_chain_locks(408-426)store_instant_lock(428-433)load_instant_lock(435-444)store_terminal_block(446-449)load_terminal_block(451-453)get_all_terminal_blocks(455-459)has_terminal_block(461-463)store_mempool_transaction(466-469)remove_mempool_transaction(471-474)get_mempool_transaction(476-478)get_all_mempool_transactions(480-482)store_mempool_state(484-487)load_mempool_state(489-491)clear_mempool(493-497)dash-spv/src/storage/mod.rs (21)
store_sync_state(136-136)load_sync_state(139-139)clear_sync_state(142-142)store_sync_checkpoint(145-145)get_sync_checkpoints(148-148)store_chain_lock(151-151)load_chain_lock(154-154)get_chain_locks(157-157)store_instant_lock(160-160)load_instant_lock(163-163)store_terminal_block(166-166)load_terminal_block(169-169)get_all_terminal_blocks(172-172)has_terminal_block(175-175)store_mempool_transaction(179-179)remove_mempool_transaction(182-182)get_mempool_transaction(185-185)get_all_mempool_transactions(188-188)store_mempool_state(191-191)load_mempool_state(194-194)clear_mempool(197-197)dash-spv/src/wallet/utxo.rs (1)
serialize(84-99)
🪛 LanguageTool
CLAUDE.md
[typographical] ~185-~185: If you want to indicate numerical ranges or time ranges, consider using an en dash.
Context: ... - Support for Dash Core versions 0.18.0 - 0.21.0 ## Security Considerations - Th...
(DASH_RULE)
dash-spv/CLAUDE.md
[uncategorized] ~104-~104: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...: Filters** - Download specific filters on demand - Phase 5: Blocks - Download blocks...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🪛 Ruff (0.11.9)
dash-spv/scripts/fetch_terminal_blocks.py
153-153: f-string without any placeholders
Remove extraneous f prefix
(F541)
155-155: f-string without any placeholders
Remove extraneous f prefix
(F541)
156-156: f-string without any placeholders
Remove extraneous f prefix
(F541)
157-157: f-string without any placeholders
Remove extraneous f prefix
(F541)
158-158: f-string without any placeholders
Remove extraneous f prefix
(F541)
🪛 Flake8 (7.2.0)
dash-spv/scripts/fetch_terminal_blocks.py
[error] 39-39: expected 2 blank lines, found 1
(E302)
[error] 61-61: expected 2 blank lines, found 1
(E302)
[error] 98-98: expected 2 blank lines, found 1
(E302)
[error] 153-153: f-string is missing placeholders
(F541)
[error] 155-155: f-string is missing placeholders
(F541)
[error] 156-156: f-string is missing placeholders
(F541)
[error] 157-157: f-string is missing placeholders
(F541)
[error] 158-158: f-string is missing placeholders
(F541)
[error] 164-164: expected 2 blank lines after class or function definition, found 1
(E305)
🪛 Pylint (3.3.7)
dash-spv/scripts/fetch_terminal_blocks.py
[refactor] 48-51: Unnecessary "else" after "return", remove the "else" and de-indent the code inside it
(R1705)
🔇 Additional comments (140)
dash-spv-ffi/src/lib.rs (1)
20-20: Minor formatting improvement looks good.The added blank line improves visual separation in the re-exports section.
dash-spv-ffi/tests/test_types.rs (1)
90-90: Test update correctly reflects struct changes.

The addition of `filter_sync_available: true` aligns with the updated `SyncProgress` struct definition and ensures test compatibility.

dash-spv-ffi/tests/unit/test_type_conversions.rs (1)
148-148: Consistent test update maintains compatibility.Good consistency with the same field addition pattern used in other test files.
dash-spv-ffi/src/error.rs (1)
57-57: Verification script for the `SpvError` enum:

```shell
#!/bin/bash
# Display the SpvError enum definition with its variants
rg -n -A50 'pub enum SpvError' dash-spv/src/error.rs
```

dash-spv-ffi/Cargo.toml (1)

24-25: Verify that new logging dependencies are actually used.

The addition of `env_logger` and `tracing` as runtime dependencies should correspond to actual usage in the codebase to avoid unnecessary bloat. Let me check if these dependencies are being used in the source code:

```shell
#!/bin/bash
# Search for usage of env_logger and tracing in the source files
echo "Searching for env_logger usage:"
rg -n "env_logger" --type rust

echo -e "\nSearching for tracing usage:"
rg -n "tracing::|use tracing" --type rust

echo -e "\nSearching for log macros that might use these dependencies:"
rg -n "trace!|debug!|info!|warn!|error!" --type rust
```

.gitignore (1)
29-32: LGTM! Standard library artifacts correctly ignored.The added patterns properly cover native library artifacts across all major platforms, which aligns with the FFI enhancements in this PR.
dash-spv/src/client/consistency.rs (2)
75-75: Excellent refactoring! Simplified error mapping.Using direct function pointer
SpvError::Storageis more idiomatic than the equivalent closure|e| SpvError::Storage(e).
178-178: Consistent improvement in error handling.Good consistency with the same error mapping simplification pattern used elsewhere in the method.
dash-spv/src/client/filter_sync.rs (2)
70-70: Consistent error mapping improvements.Both changes follow the same beneficial pattern of simplifying error mapping from closures to direct function pointers, making the code more idiomatic and readable.
Also applies to: 120-120
157-157: Good consistency in error handling refactoring.The same error mapping simplification pattern applied to
SpvError::Sync, maintaining consistency with the other error mapping improvements in this file.dash-spv-ffi/cbindgen.toml (1)
13-13: Configuration change aligns with FFI callback refactoring.The exclusion of Option-wrapped callback types is appropriate for the new explicit callback pointer usage in the FFI layer, preventing complex Option handling across language boundaries.
dash-network-ffi/src/dash_network_ffiFFI.modulemap (1)
1-7: Well-formed modulemap for Darwin FFI integration.The module definition follows standard conventions with appropriate system dependencies for Darwin, stdbool, and stdint modules.
dash-spv-ffi/tests/unit/test_client_lifecycle.rs (2)
95-95: Test updated correctly for new FFI API signature.The change from a callbacks struct to separate callback parameters aligns with the FFI refactoring described in the summary.
231-231: Consistent API usage across test functions.The test correctly uses the new function signature with explicit callback parameters.
dash-spv/src/main.rs (3)
152-152: Informative logging for new sync strategy.The log message clearly indicates the sequential synchronization approach, which aligns with the architectural changes described in the PR.
389-398: Enhanced shutdown handling improves robustness.The improved signal handler provides:
- More specific logging with "(Ctrl-C)" indication
- Explicit client stop with proper error handling
- Early return to prevent duplicate shutdown operations
This aligns well with the client's async
stop()method that handles sync state saving and storage shutdown.
402-402: Clear documentation of shutdown flow.The comment helpfully clarifies when this client stop occurs versus the signal handler stop.
dash-spv/Cargo.toml (3)
13-13: BLS feature enablement supports cryptographic operations.Adding the "bls" feature to the dashcore dependency enables BLS signature functionality required for Dash's cryptographic operations.
41-41: Hex utility promotion to main dependencies.Moving hex from dev-dependencies to main dependencies indicates it's now used in production code, likely for the FFI layer or cryptographic operations described in the PR.
16-17: Verify BLS library version for security and compatibility.The blsful 2.5 dependency provides BLS signature support. Please verify this is the latest stable version and check for any security advisories.
What is the latest version of the blsful Rust crate and are there any known security issues with version 2.5?dash-spv/src/network/constants.rs (1)
6-8: Approve peer count increases for enhanced network robustness.The significant increase in peer counts (MIN_PEERS: 2→3, TARGET_PEERS: 3→8, MAX_PEERS: 5→12) aligns well with the introduction of peer reputation system and multi-peer management enhancements mentioned in the PR summary. The compile-time assertions ensure the ordering invariant is maintained.
dash-spv/src/error.rs (1)
27-28: LGTM! New general error variant enhances error handling.The addition of
General(String)variant toSpvErrorprovides flexibility for handling miscellaneous error cases that don't fit other specific categories.dash-spv-ffi/tests/test_client.rs (1)
213-261: Well-structured diagnostic test with proper error handling.The new
test_sync_diagnostictest function properly:
- Sets up an isolated test environment with temporary directory
- Enables test mode for deterministic behavior
- Handles start/sync errors gracefully with detailed error reporting
- Performs proper resource cleanup
This test effectively validates the new diagnostic sync functionality introduced in the FFI layer.
dash-spv/examples/test_genesis.rs (1)
1-33: Useful example for genesis block verification.This example effectively demonstrates the usage of
genesis_block()function and provides a practical tool for verifying genesis block generation. The inclusion of expected values for testnet enables easy verification of correctness.dash-spv/data/testnet/mod.rs (2)
1-2: Verify auto-generation process and maintenance.Since this is auto-generated code, ensure the generation script properly handles updates and that the process is documented for maintainability.
5-110: Check for potential function duplication.

The `load_testnet_terminal_blocks` function appears to duplicate functionality found in `dash-spv/src/sync/terminal_block_data/testnet.rs` with the same name. Both functions load terminal block data but use different file path approaches. Run the following script to verify if there are duplicate functions and their usage:

```shell
#!/bin/bash
# Description: Check for duplicate terminal block loading functions

# Search for the function definition
echo "=== Function definitions ==="
rg -A 5 "fn load_testnet_terminal_blocks"

echo -e "\n=== Function calls ==="
rg "load_testnet_terminal_blocks"

echo -e "\n=== File paths used ==="
rg "terminal_block_.*\.json"
```

dash-spv/src/bloom/mod.rs (1)
1-9: Well-structured bloom filter module organization.The module provides a clean separation of concerns with logical submodules for building, managing, and monitoring bloom filters. The public re-exports create a convenient API surface.
dash-spv-ffi/tests/unit/test_async_operations.rs (3)
83-83: Verify null callback handling.The test correctly validates that the FFI function handles null callbacks gracefully. The new API with individual callback parameters appears to be implemented consistently.
107-107: Consistent callback API usage.The updated function signature correctly accepts individual callback function pointers rather than a callbacks struct, maintaining API consistency.
130-130: Consistent callback parameter passing.All test functions have been updated to use the new callback API consistently, passing progress and completion callbacks as separate parameters with user data.
Also applies to: 162-162, 245-245
dash-spv/src/storage/types.rs (2)
17-20: Good backward-compatible extension.The optional
terminal_block_hashfield maintains backward compatibility while enabling terminal block association with masternode states.
22-39: Well-designed terminal block storage structure.The
StoredTerminalBlockstruct contains all necessary fields for terminal block metadata with appropriate types. The use ofOptionfor optional fields likemasternode_list_merkle_rootandmasternode_list_stateprovides flexibility.dash-spv/data/mainnet/mod.rs (2)
1-1: Auto-generated code indicator.The comment clearly indicates this is auto-generated code, which helps reviewers understand the repetitive structure.
8-12: Verify terminal block data file paths exist.Ensure all referenced JSON files exist in the same directory as this module. Missing files would cause compilation failures.
```shell
#!/bin/bash
# Description: Verify that all terminal block JSON files referenced in the code exist
# Expected: All files should exist in the dash-spv/data/mainnet/ directory

echo "Checking for terminal block JSON files..."
cd dash-spv/data/mainnet/ 2>/dev/null || { echo "Directory dash-spv/data/mainnet/ not found"; exit 1; }

missing_files=0
for height in 1088640 1100000 1150000 1200000 1250000 1300000 1350000 1400000 1450000 1500000 1550000 1600000 1650000 1700000 1720000 1750000 1800000 1850000 1900000 1950000 2000000; do
    file="terminal_block_${height}.json"
    if [ ! -f "$file" ]; then
        echo "Missing: $file"
        missing_files=$((missing_files + 1))
    else
        echo "Found: $file"
    fi
done

if [ $missing_files -gt 0 ]; then
    echo "Warning: $missing_files terminal block files are missing"
    exit 1
else
    echo "All terminal block files found"
fi
```

Also applies to: 16-20, 24-28
dash-spv/examples/test_terminal_blocks.rs (1)
1-41: Well-structured example demonstrating terminal block functionality.The example effectively demonstrates the
TerminalBlockManagerAPI with clear output formatting and comprehensive testing of both data availability and lookup functions.dash-spv/README.md (2)
26-26: LGTM! Clear feature documentation.The addition of the peer reputation system to the feature list is well-placed and accurately describes the new functionality.
88-100: Excellent comprehensive documentation of the peer reputation system.The new section provides a clear and thorough overview of the peer reputation system's capabilities, including automatic tracking, configurable thresholds, banning, decay, persistence, and smart peer selection. The reference to detailed documentation is helpful for users who need more information.
CLAUDE.md (1)
1-204: Excellent comprehensive project documentation.The complete rewrite transforms this from a SPARC methodology guide into a comprehensive project reference that covers all essential aspects:
- Clear project overview and architecture
- Detailed repository structure breakdown
- Comprehensive build and test commands
- Key features and current status
- Security considerations and limitations
This documentation will be invaluable for developers working with the rust-dashcore project.
dash-spv/CLAUDE.md (3)
18-18: Accurate documentation of the sequential sync architecture.The updates correctly reflect the new sequential sync manager approach, showing how it replaces the previous centralized model with a phase-based synchronization system.
Also applies to: 27-29
100-108: Clear explanation of the sequential sync phases.The documentation effectively explains the five-phase sequential synchronization approach and its benefits for consistency and error recovery. The phase-by-phase breakdown helps developers understand the sync coordination logic.
154-160: Well-documented workflow for the sequential sync approach.The updated workflow description clearly explains how developers should work with the new sequential sync system, from client creation through monitoring progress.
dash-spv/src/network/reputation_tests.rs (1)
1-118: Excellent comprehensive test coverage for the peer reputation system.This test suite provides thorough coverage of all key reputation system functionality:
- Basic operations: Score initialization, misbehavior penalties, and positive behavior rewards
- Banning mechanism: Threshold-based banning with proper accumulation logic
- Persistence: Data survival across manager instances using temporary files
- Peer selection: Prioritization of well-behaved peers in selection algorithms
- Connection tracking: Proper recording of connection attempts and successes
The tests use proper async/await patterns, realistic test data, and clear assertions. The test structure is well-organized with descriptive names and good separation of concerns.
dash-spv/src/chain/chainlock_test.rs (1)
1-103: Well-structured test suite for ChainLock functionality.This test suite provides solid coverage of the ChainLockManager's core functionality:
- Basic processing: Verifies ChainLock storage, retrieval, and validation flow
- Superseding behavior: Ensures multiple ChainLocks coexist properly and highest is tracked correctly
- Reorganization protection: Validates security boundaries to prevent reorgs past ChainLocked heights
The tests use appropriate test data with realistic heights (1000, 2000, 3000) and proper hash generation. The reorganization protection test effectively validates the security model by testing both allowed and disallowed reorg scenarios. The use of MemoryStorageManager and proper async patterns follows testing best practices.
dash-spv/src/chain/chain_work.rs (1)
178-181: Verification script for the carry logic in `combine`:

```shell
#!/bin/bash
# Display the full combine implementation to verify carry logic
rg -n -C20 "pub fn combine" dash-spv/src/chain/chain_work.rs
```

dash-spv/src/chain/mod.rs (1)

1-44: Well-structured chain management module.

The module organization is clean and the Fork struct appropriately captures the essential information needed for fork handling. The re-exports provide a clear public API for chain management functionality.

dash-spv/src/network/message_handler.rs (3)

45-52: Excellent addition of Headers2 support.

The implementation correctly handles the new compressed headers protocol with proper statistics tracking and result variants.

82-89: Proper handling of ChainLock and InstantLock messages.

The implementation correctly processes these important Dash-specific message types with appropriate statistics tracking.

170-172: Good addition of statistics for new message types.

The statistics structure is properly extended to track the new Headers2 and SendHeaders2 message types, maintaining consistency with the existing pattern.

dash-spv/src/client/block_processor.rs (2)

42-43: Good addition of event emission capability.

The event channel integration allows for proper notification of block processing events to external consumers.

548-550: Verification script for the balance type definitions:

```shell
#!/bin/bash
# Inspect AddressBalance definition and documentation
rg -A5 -B5 "pub struct AddressBalance" dash-spv/src/types.rs

# Inspect MempoolBalance definition and documentation
rg -A5 -B5 "pub struct MempoolBalance" dash-spv/src/types.rs
```

dash-spv/scripts/fetch_terminal_blocks.py (1)
16-37: Well-structured script with proper error handling.The script demonstrates good practices:
- Clear separation of concerns between functions
- Proper error handling for subprocess calls and JSON parsing
- Validation of command-line arguments
- Organized output directory structure
- Generation of both data files and Rust integration code
The logic for fetching masternode data and generating embedded Rust modules is sound and aligns well with the terminal blocks architecture described in the documentation.
Also applies to: 39-97
dash-spv-ffi/tests/test_mempool_tracking.rs (3)
8-25: Well-designed callback structure for FFI testing.The
TestCallbacksstruct with atomic counters and the corresponding extern "C" callback functions provide a clean way to test mempool event propagation through the FFI boundary. The use ofArc<Mutex<u32>>ensures thread-safe access to the counters.
48-91: Comprehensive mempool configuration testing.This test covers all the essential mempool configuration options:
- Enabling mempool tracking
- Setting strategy to `FetchAll`
- Configuring transaction limits and timeouts
- Validating configuration getters
The test properly creates and destroys resources, preventing memory leaks.
115-133: Proper callback registration and cleanup.The callback setup correctly:
- Creates a boxed `TestCallbacks` instance
- Converts it to a raw pointer for FFI
- Sets up the `FFIEventCallbacks` structure
- Cleans up the raw pointer after use
This demonstrates proper FFI resource management.
dash-spv/docs/TERMINAL_BLOCKS.md (1)
1-149: Excellent comprehensive documentation.This documentation provides a thorough explanation of the terminal blocks system:
Strengths:
- Clear overview and benefits explanation
- Detailed technical implementation with concrete examples
- Security considerations and limitations are well-documented
- Practical usage instructions with code examples
- Proper data structure documentation with JSON examples
Technical Accuracy:
- The sync flow description aligns with the implementation
- Terminal block heights match those defined in the Python script
- Data structure format matches the expected JSON output
- Security considerations are realistic and well-thought-out
The documentation effectively serves as both a user guide and technical reference for the terminal blocks feature.
dash-spv-ffi/src/wallet.rs (3)
`36-36`: **Improved constructor usage for WatchItem::Address.** Using `WatchItem::address(addr)` instead of manual struct construction is cleaner and more maintainable. This approach leverages the provided constructor method that properly sets `earliest_height: None`.

Also applies to: 74-74
`96-97`: **Appropriate mempool fields added to FFIBalance.** The addition of `mempool` and `mempool_instant` fields expands the FFI balance structure to support the new mempool tracking functionality. These fields align with the mempool tracking features being introduced throughout the codebase.
`107-109`: **Correct implementation of mempool field mapping.** The `From` implementations properly map the internal balance types to the FFI structure:
- For `Balance`: maps `balance.mempool` and `balance.mempool_instant` directly
- For `AddressBalance`: maps `balance.pending` to both `pending` and `mempool`, and `balance.pending_instant` to `mempool_instant`
- Uses the `balance.total()` method instead of manual calculation, which is more reliable

The mapping logic correctly handles the differences between internal balance types while providing a consistent FFI interface.
Also applies to: 120-123
dash-spv/src/sync/headers2_state.rs (4)
`25-47`: **Well-defined error handling for Headers2 processing.** The `ProcessError` enum provides clear categorization of compression-related errors:
- `FirstHeaderNotFull` enforces the protocol requirement
- `DecompressionError` provides detailed context with the header index

The error implementations with `Display` and `Error` traits follow Rust best practices.
`49-77`: **Solid state management design.** The `Headers2StateManager` provides:
- Efficient per-peer state tracking using `HashMap<PeerId, CompressionState>`
- Comprehensive statistics collection
- A clean API with a `get_state()` method using `entry().or_insert_with()`

The statistics fields appropriately track compression efficiency metrics.
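The lazy per-peer state pattern mentioned above is worth a minimal sketch. The struct and field names are illustrative stand-ins for the real `Headers2StateManager`:

```rust
use std::collections::HashMap;

type PeerId = u64;

#[derive(Default)]
struct CompressionState {
    headers_decompressed: u64,
}

#[derive(Default)]
struct StateManagerSketch {
    states: HashMap<PeerId, CompressionState>,
}

impl StateManagerSketch {
    // Mirrors the get_state() pattern: per-peer state is created
    // lazily the first time a peer is seen, with no separate
    // contains_key/insert dance.
    fn get_state(&mut self, peer: PeerId) -> &mut CompressionState {
        self.states.entry(peer).or_insert_with(CompressionState::default)
    }
}

pub fn demo() -> (u64, u64) {
    let mut mgr = StateManagerSketch::default();
    mgr.get_state(1).headers_decompressed += 5;
    mgr.get_state(1).headers_decompressed += 2;
    let peer1 = mgr.get_state(1).headers_decompressed;
    let peer2 = mgr.get_state(2).headers_decompressed; // fresh state
    (peer1, peer2)
}

fn main() {
    assert_eq!(demo(), (7, 0));
}
```

`entry().or_insert_with()` does a single hash lookup for both the hit and miss paths, which is why it is the idiomatic choice here.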
`79-112`: **Robust header processing with proper validation.** The `process_headers` method implements critical validation:
- Ensures first header is uncompressed (line 86-88)
- Maintains statistics during processing
- Provides detailed error context with header index
- Handles empty input gracefully
This aligns with the Headers2 protocol requirements and provides good error diagnostics.
`172-262`: **Comprehensive test coverage.** The tests effectively cover:
- Basic header processing and decompression
- First header validation requirement
- Peer state reset functionality
- Statistics calculation with edge cases
The test helper `create_test_header()` and use of compression state provide realistic test scenarios. The tests validate both success and error paths appropriately.

dash-spv-ffi/tests/test_event_callbacks.rs (1)
`139-142`: **Good test design with clear expectations.** The test correctly acknowledges that event reception depends on network conditions and focuses on testing the setup functionality rather than guaranteeing events will fire. This is a solid approach for integration tests.
dash-spv/src/network/mock.rs (3)
`69-98`: **Efficient header lookup implementation.** The `process_getheaders` method correctly implements the Bitcoin protocol's header locator logic. The early-break optimization and the 2000-header limit align with protocol specifications.
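The locator logic can be illustrated with a toy model, assuming `u32` ids in place of block hashes (the real method works on hash chains; `headers_after` is a hypothetical name):

```rust
// Scan the locator (newest-first) and break at the first hash we know,
// then serve at most `max` headers after it — 2000 in the real protocol.
fn headers_after(chain: &[u32], locator: &[u32], max: usize) -> Vec<u32> {
    let start = locator
        .iter()
        .find_map(|wanted| chain.iter().position(|h| h == wanted))
        .map(|idx| idx + 1)
        .unwrap_or(0); // no locator hash known: serve from the start
    chain[start..].iter().take(max).copied().collect()
}

fn main() {
    let chain: Vec<u32> = (0..10).collect();
    // 7 is the first known locator entry, so serve 8 and 9.
    assert_eq!(headers_after(&chain, &[7, 3], 2000), vec![8, 9]);
    // An entirely unknown locator falls back to genesis, capped at `max`.
    assert_eq!(headers_after(&chain, &[99], 3), vec![0, 1, 2]);
}
```

The early break matters because locators are ordered newest-first: the first match is the most recent common block, so scanning further would only find older ancestors.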
`190-192`: **Consider exposing the message sender for testing.** The `get_message_sender` method provides good testing capability by allowing external injection of messages into the mock network manager's queue.
`46-66`: Gathering mock implementation details and usage:

```shell
#!/bin/bash
# Show mock.rs implementation around add_headers_chain
sed -n '1,150p' dash-spv/src/network/mock.rs
# Find all call sites of add_headers_chain
rg "add_headers_chain" -n .
```

dash-spv-ffi/src/config.rs (2)
`222-232`: **Consistent and safe FFI implementation pattern.** The mempool configuration functions follow a solid pattern with proper null checks and error handling. The implementation correctly uses the `null_check!` macro and returns appropriate success codes.
`307-316`: **Good default fallback in getter function.** The `get_mempool_strategy` function properly handles null pointers by returning a sensible default value (`FFIMempoolStrategy::Selective`), which is better than panicking or returning an invalid value.

dash-spv/docs/utxo_rollback.md (3)
`1-6`: **Excellent documentation structure and overview.** The documentation provides a clear and comprehensive overview of the UTXO rollback mechanism, explaining its purpose and integration with blockchain reorganizations. The structure is logical and easy to follow.
`21-29`: I'll search for these methods and related definitions across the repo to confirm their presence and signatures.

```shell
#!/bin/bash
# Search for occurrences and definitions of the documented rollback APIs
rg 'with_rollback' -n -C3 .
rg 'init_rollback_from_storage' -n -C3 .
rg 'impl WalletState' -n -A5 .
rg 'UTXORollbackManager' -n -A5 .
```
`150-163`:

```shell
#!/bin/bash
# List any Rust test files under tests/ containing "rollback" in their filename
fd -t f -e rs tests | grep -i rollback
# Search within all tests for occurrences of "rollback"
rg -l "rollback" -g "tests/**/*.rs"
```
dash-spv/src/network/mod.rs (3)

`77-89`: **Well-designed trait extension with sensible defaults.** The new methods properly extend the NetworkManager trait with useful peer querying capabilities. The default implementation of `has_headers2_peer()` that delegates to `has_peer_with_service()` is a good design pattern.

`110-110`: **Proper integration of mempool strategy.** The addition of the `config.mempool_strategy` parameter to `HandshakeManager::new()` correctly integrates the new mempool functionality into the handshake process.

`243-252`: **Robust service flag checking implementation.** The implementation correctly handles the optional services field and uses proper service flag bit operations. The fallback to `false` when services are not available is appropriate.

dash-spv/src/client/config.rs (4)

`12-21`: **LGTM!** The `MempoolStrategy` enum is well-designed with clear documentation for each variant's privacy and bandwidth trade-offs.

`110-130`: **Well-structured mempool configuration fields with sensible defaults.** The new fields follow the existing naming conventions and the default values are appropriate - mempool tracking disabled by default with conservative limits when enabled.

Also applies to: 164-171

`179-183`: **Good use of struct update syntax.** This simplifies the constructor and ensures consistency with default values.

`346-354`: **Good cleanup of default peer lists.** Removing localhost peers from the default lists is appropriate for production use.

dash-spv/src/chain/orphan_pool.rs (2)

`1-35`: **Well-designed orphan pool data structures.** The dual indexing approach (by hash and by previous hash) with an eviction queue provides efficient O(1) lookups and FIFO eviction. The constants for max orphans (100) and timeout (15 minutes) are reasonable.

`244-378`: **Excellent test coverage!** The tests comprehensively cover all major functionality including edge cases like duplicate prevention and max orphan limits.

dash-spv/src/sync/sequential/phases.rs (1)

`9-113`: **Well-designed sync phase enum.** The `SyncPhase` enum comprehensively models all synchronization stages with appropriate state tracking for each phase.

dash-spv/src/chain/fork_detector.rs (2)

`28-139`: **Comprehensive fork detection logic.** The `check_header` method thoroughly handles all edge cases including empty chain state, genesis connections, and chain state/storage synchronization issues. The extensive tracing will be helpful for debugging.

`141-193`: **Well-implemented fork management.** The fork tracking with automatic eviction of weakest forks and comprehensive accessor methods provides a robust fork management system.

dash-spv/src/mempool_filter.rs (2)

`46-65`: **Clear strategy-based filtering logic.** The `should_fetch_transaction` method correctly implements the three strategies with appropriate capacity checks and recent send tracking.

`67-125`: **Thorough transaction relevance checking.** The method comprehensively checks against all watched item types and includes excellent debug tracing for troubleshooting.

dash-spv/src/chain/chain_tip.rs (1)

`10-36`: **LGTM! Well-structured chain tip representation.** The `ChainTip` struct properly encapsulates all necessary metadata for tracking blockchain tips, with appropriate field visibility and initialization.

dash-spv-ffi/src/types.rs (1)

`345-355`: **Verify: Loss of associated data in MempoolRemovalReason conversion.** The conversion from the internal `MempoolRemovalReason` to `FFIMempoolRemovalReason` discards associated data (e.g., the replacement txid for `Replaced`, the conflicting txid for `DoubleSpent`). Please confirm this simplification is intentional and that FFI consumers don't need the detailed removal information. If they do need it, consider adding fields to carry this data.

dash-spv/src/sync/sequential/progress.rs (1)

`114-172`: **Well-structured phase tracking logic.** The methods for tracking completed and remaining phases properly handle the conditional nature of sync phases based on enabled features.

dash-network-ffi/src/dash_network_ffiFFI.h (1)

`1-628`: **Auto-generated file - no manual review needed.** This file is auto-generated by the uniffi crate. Any issues should be addressed in the source Rust code that generates this file, not here.

dash-spv/src/sync/masternodes.rs (3)

`16-16`: **LGTM! Terminal block manager integration is well-structured.** The addition of the terminal block manager to the masternode sync manager is clean and follows the existing pattern.

Also applies to: 26-26, 48-48

`511-538`: **Well-implemented terminal block validation and storage.** The integration of terminal block validation during masternode diff processing and the storage of terminal block hashes in the masternode state is correct and follows good practices.

Also applies to: 541-555, 483-483

`579-597`: **Clean public API for terminal block access.** The new public methods provide appropriate access to terminal block functionality for external use.

dash-spv/src/bloom/builder.rs (1)

`27-158`: **Well-structured bloom filter builder implementation.** The builder pattern is well-implemented with clear methods for configuration and construction. The `from_wallet` method provides convenient wallet integration.

dash-spv/src/sync/sequential/recovery.rs (1)

`1-151`: **Well-designed recovery management system.** The recovery strategies and error classification logic are comprehensive and well-structured. The exponential backoff implementation is particularly good.

dash-spv/src/storage/sync_state.rs (1)

`1-145`: **Excellent persistent sync state design.** The sync state structure is comprehensive with proper versioning, validation, and recovery suggestions. The checkpoint system is well-thought-out.

dash-spv/src/bloom/manager.rs (1)

`1-85`: **Well-architected bloom filter manager with good concurrency support.** The manager design with `Arc<RwLock<>>` for concurrent access and comprehensive statistics tracking is excellent.

dash-spv/src/sync/mod.rs (1)

`147-153`: **Inconsistent validation mode checking.** The header sync is skipped when `validation_mode` is `None`, but filter and masternode syncs proceed regardless. This seems inconsistent - if validation is disabled, syncing filters and masternodes may not be meaningful. Consider applying validation checks consistently across all sync types or document why this asymmetry exists.

dash-spv/src/chain/chainlock_manager.rs (1)

`234-237`: **Address the empty method implementation.** The `update_chain_state_with_lock` method is empty with a comment about mutable borrow issues. This suggests a design problem that should be addressed properly. Consider:
1. Removing this method if it's not needed
2. Returning instructions for the caller to execute
3. Refactoring to avoid the mutable borrow issue
4. Using interior mutability if necessary

An empty method that relies on the caller to "handle" its responsibility breaks encapsulation and makes the API confusing.

dash-network-ffi/src/dash_network_ffi.swift (1)

`1-3`: **Skipping review of auto-generated file.** This file is auto-generated by the `uniffi` crate as indicated in the header comment. No manual review needed.

dash-spv-ffi/include/dash_spv_ffi.h (3)

`6-11`: **Well-designed enums and progress tracking struct.** The new `FFIMempoolStrategy` and `FFISyncStage` enums provide clear categorization of mempool strategies and sync stages. The `FFIDetailedSyncProgress` struct offers comprehensive sync progress information with all necessary fields.

Also applies to: 19-28, 49-61

`106-129`: **Improved callback design with detailed parameters.** The migration to typedef'd function pointers improves code clarity, and the additional parameters in callbacks (amount, addresses, block_height) provide valuable context for FFI consumers.

`207-215`: **Comprehensive FFI API additions for mempool and sync control.** The new functions provide excellent coverage for:
- Detailed sync progress tracking with cancellation support
- Mempool tracking with various strategies
- Configuration options for mempool behavior

The consistent use of `int32_t` return types for error handling is appropriate.

Also applies to: 283-293, 322-340

dash-spv-ffi/src/callbacks.rs (1)

`157-193`: **Well-implemented mempool callbacks with proper FFI safety.** The new mempool callbacks follow the established pattern with:
- Proper null checking via Option
- Safe C string conversion
- Consistent logging (though consider debug level as mentioned above)
- Address array handling via comma-separated strings, which is appropriate for FFI

dash-spv/src/sync/headers.rs (1)

`165-195`: **Excellent Headers2 compression support implementation.** The implementation properly:
- Manages per-peer compression state
- Provides useful compression statistics
- Delegates to existing header handling after decompression

This will significantly reduce bandwidth usage during header sync.

dash-spv/src/sync/headers_with_reorg.rs (1)

`362-371`: **Headers2 support is disabled.** The compressed headers (Headers2) functionality is implemented but disabled by the hardcoded `false` condition. The TODO comment suggests there's an issue with receiving responses from peers. This disables a bandwidth optimization feature. Consider investigating why GetHeaders2 doesn't receive responses. Possible issues:
- Peer doesn't support Headers2 despite advertising it
- Message format incompatibility
- Network layer not routing Headers2 responses correctly

dash-spv/src/client/message_handler.rs (1)

`124-168`: **Well-structured mempool transaction handling.** The implementation correctly handles mempool transactions with proper filtering, state management, persistence, and event emission. Good use of guards and error handling.

dash-spv/src/network/multi_peer.rs (2)

`141-145`: **Good reputation-based connection filtering.** Excellent implementation of reputation checking before attempting connections. This prevents wasting resources on known bad peers.

`1061-1142`: **Well-implemented peer capability queries.** The new NetworkManager trait methods provide comprehensive peer querying functionality with proper null safety and service flag checking.

dash-spv/src/network/connection.rs (1)

`571-583`: **Well-implemented Headers2 support detection.** The `can_request_headers2` method correctly checks both the service flag and the SendHeaders2 message, ensuring proper protocol negotiation.

dash-spv/src/sync/filters.rs (11)

`172-186`: **Well-designed flow control and availability check methods.** The new public methods provide clean APIs for:
- Toggling flow control for filter downloads
- Checking if any peer supports compact filters before attempting sync

The implementation correctly uses the network manager to check peer capabilities.

`375-413`: **Robust tip header handling with appropriate fallback logic.** The implementation correctly handles the edge case where the exact tip header might not be available by falling back to the previous header. This prevents sync failures at the chain tip and includes proper error handling for all scenarios.

`530-569`: **Consistent tip header fallback logic in recovery path.** The recovery path correctly mirrors the normal sync path's tip header handling, maintaining consistency and improving reliability during timeout recovery scenarios.

`598-602`: **Excellent proactive peer capability checking.** The implementation prevents unnecessary sync attempts by checking if any connected peer supports compact filters (BIP 157/158). The warning messages clearly guide users on what's needed to enable filter sync.

`1089-1090`: **Good state management practice.** Clearing stale state before starting a new sync session prevents potential issues from previous failed attempts.

`1106-1107`: **Clear documentation of sync flag lifecycle.** The updated comment properly explains that the `syncing_filters` flag should remain true throughout the download process and will be cleared upon completion or failure.

`1303-1312`: **Helpful periodic progress logging.** Logging filter sync state every 1000 filters provides good visibility into long-running sync operations without overwhelming the logs.

`2106-2145`: **Well-structured filter request management methods.** The new utility methods provide a clean API for:
- Checking active request counts and available slots
- Managing pending filter requests
- Sending filter batches with proper flow control

Good separation of concerns and clear method names.

`2222-2235`: **Thoughtful state clearing implementation.** The method correctly clears only the request tracking state while preserving `received_filter_heights`, which represents actual received data. This distinction is important for retry and recovery scenarios.

`2250-2256`: **Safe implementation of filter count getter.** The method correctly handles potential mutex lock failures by returning 0 instead of panicking, which is appropriate for a monitoring/status method.

`3123-3136`: **Comprehensive sync state reset method.** The method properly resets all sync-related state and sets `last_sync_progress` to the current time to prevent false timeout detection after reset. Good addition for restart scenarios.

dash-spv-ffi/src/client.rs (5)

`106-192`: **Well-structured event listener implementation.** The event listener implementation correctly:
- Uses the tokio async runtime for event processing
- Takes ownership of the event receiver to prevent duplicate listeners
- Handles all event types with appropriate logging
- Properly manages thread handles

The detailed logging with emoji prefixes aids debugging.

`580-597`: **Clean async implementation for filter sync check.** The implementation properly handles the async context and provides safe error handling with appropriate default return values.

`1270-1306`: **Efficient total balance calculation.** The implementation correctly:
- Iterates through all watched addresses
- Aggregates balances safely
- Provides debug logging for troubleshooting
- Handles errors gracefully without panicking

Good use of the existing API methods.

`1565-1581`: **Incorrect balance field mapping for mempool.** The FFIBalance construction maps mempool balances incorrectly - it sets `confirmed` to 0 but mempool transactions are by definition unconfirmed. Fix the field mapping:

```diff
 let balance = FFIBalance {
-    confirmed: 0, // No confirmed balance in mempool
+    confirmed: 0,
+    unconfirmed: mempool_balance.pending.to_sat() + mempool_balance.pending_instant.to_sat(),
     pending: mempool_balance.pending.to_sat(),
     instantlocked: 0, // No confirmed instantlocked in mempool
     mempool: mempool_balance.pending.to_sat(),
     mempool_instant: mempool_balance.pending_instant.to_sat(),
-    total: mempool_balance.pending.to_sat() + mempool_balance.pending_instant.to_sat(),
+    total: 0, // Mempool balance should not be included in total confirmed balance
 };
```

Likely an incorrect or invalid review comment.
`416-419`: **Unsafe pointer to usize conversion.** Converting a raw pointer to `usize` for thread safety is a workaround that could cause issues on platforms where pointers don't fit in `usize`. Consider using a proper synchronization mechanism or ensure the conversion is safe:

```diff
-let user_data_usize = {
-    let guard = user_data_ptr.lock().unwrap();
-    *guard as usize
-};
+// Ensure pointer fits in usize
+let user_data_usize = {
+    let guard = user_data_ptr.lock().unwrap();
+    *guard as usize
+};
+debug_assert_eq!(*guard as usize as *mut c_void, *guard, "Pointer truncation detected");
```

Likely an incorrect or invalid review comment.
dash-spv/src/storage/disk.rs (7)
`123-126`: **Good addition of mempool storage fields.** The mempool storage fields are properly initialized with thread-safe `Arc<RwLock<>>` wrappers, consistent with other storage fields.

`895-899`: **Smart handling of padding headers during save.** The code correctly skips default/padding headers when saving to disk, preventing unnecessary disk usage. The condition properly identifies padding headers.

`1000-1063`: **Excellent logging strategy.** The differentiated logging levels (trace/debug/info) based on batch size are a thoughtful approach that balances debugging needs with log volume. Single headers use trace, small batches use debug, and large batches use info level.

`1133-1152`: **Proper null header filtering.** The `get_header` method correctly:
- Checks if height is beyond tip before disk access
- Filters out padding headers
- Provides debug logging for missing headers
This prevents returning invalid data to callers.
`1383-1385`: **Complete cleanup in clear method.** Good addition of mempool cleanup to ensure all state is properly cleared when the storage is reset.
`1530-1534`: **Atomic file writes for sync state.** Excellent use of the temporary file + rename pattern for atomic writes. This prevents corruption if the process crashes during write.
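The temp-file + rename pattern can be sketched in a few lines of std (`write_atomic` is an illustrative name, not the crate's actual function; the atomicity claim holds for `rename` on POSIX filesystems):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Write to a sibling temp file, then rename over the target. rename()
// replaces atomically, so a crash mid-write leaves either the old
// state file or the new one — never a half-written file.
fn write_atomic(path: &Path, data: &[u8]) -> io::Result<()> {
    let tmp = path.with_extension("tmp");
    fs::write(&tmp, data)?;
    fs::rename(&tmp, path)
}

fn main() -> io::Result<()> {
    let target = std::env::temp_dir().join("spv_sync_state.bin");
    write_atomic(&target, b"state-v1")?;
    write_atomic(&target, b"state-v2")?; // replaces, never truncates in place
    assert_eq!(fs::read(&target)?, b"state-v2");
    fs::remove_file(&target)?;
    Ok(())
}
```

Note that the temp file must live on the same filesystem as the target, since `rename` cannot atomically cross mount points.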
`1578-1609`: **Efficient checkpoint range queries.** The implementation properly:
- Handles missing directories gracefully
- Parses heights from filenames safely
- Filters by range before loading
- Sorts results by height
Good defensive programming with proper error handling.
dash-spv/src/client/mod.rs (5)
`42-89`: **LGTM! Well-structured event and progress handling.** The new fields and methods for progress tracking and event emission are properly implemented with appropriate ownership semantics and error handling.

`320-397`: **LGTM! Robust sync state restoration with proper error handling.** The sync state restoration logic includes comprehensive error handling and appropriate fallback to fresh sync when restoration fails.

`1666-1684`: **Document or deprecate no-op filter sync methods.** Converting public API methods to no-ops could confuse users. These methods should either delegate to the sequential sync manager or be properly deprecated.
Please verify the intended behavior:
- Should these methods delegate to the sequential sync manager?
- Should they be marked as deprecated?
- Are there any external consumers relying on these methods?
Consider adding deprecation notices:
```rust
#[deprecated(since = "0.x.x", note = "Filter sync is now handled internally by SequentialSyncManager")]
pub async fn sync_and_check_filters(...) -> Result<Vec<crate::types::FilterMatch>> {
    // ...
}
```
`793-848`: **LGTM! Comprehensive progress tracking implementation.** The progress emission logic is well-implemented with proper rate calculation, time estimation, and sync stage determination.

`2014-2059`: **LGTM! Well-implemented sync state persistence.** The sync state saving logic properly handles checkpointing and includes appropriate error handling.
dash-spv/src/storage/mod.rs (2)
`135-143`: **Good addition of sync state persistence methods.** The sync state methods provide essential functionality for resuming interrupted syncs. The separation of store/load/clear operations follows established patterns.
`177-197`: **Comprehensive mempool storage interface.** The mempool storage methods provide a complete interface for transaction tracking and state persistence. The separation between individual transaction operations and bulk state operations is well-designed.
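A hypothetical sketch of the split described above, with per-transaction operations alongside bulk state persistence. The trait and method names are illustrative, not the crate's actual storage API:

```rust
use std::collections::HashMap;

// Per-tx operations and a bulk snapshot live side by side, so callers
// can mutate incrementally but persist state in one shot.
trait MempoolStore {
    fn store_mempool_tx(&mut self, txid: [u8; 32], raw: Vec<u8>);
    fn remove_mempool_tx(&mut self, txid: &[u8; 32]) -> Option<Vec<u8>>;
    fn store_mempool_state(&mut self, snapshot: Vec<[u8; 32]>);
    fn mempool_tx_count(&self) -> usize;
}

#[derive(Default)]
struct MemoryMempool {
    txs: HashMap<[u8; 32], Vec<u8>>,
    snapshot: Vec<[u8; 32]>,
}

impl MempoolStore for MemoryMempool {
    fn store_mempool_tx(&mut self, txid: [u8; 32], raw: Vec<u8>) {
        self.txs.insert(txid, raw);
    }
    fn remove_mempool_tx(&mut self, txid: &[u8; 32]) -> Option<Vec<u8>> {
        self.txs.remove(txid)
    }
    fn store_mempool_state(&mut self, snapshot: Vec<[u8; 32]>) {
        self.snapshot = snapshot; // bulk persistence, separate from per-tx ops
    }
    fn mempool_tx_count(&self) -> usize {
        self.txs.len()
    }
}

fn main() {
    let mut store = MemoryMempool::default();
    store.store_mempool_tx([1; 32], vec![0xde, 0xad]);
    assert_eq!(store.mempool_tx_count(), 1);
    assert_eq!(store.remove_mempool_tx(&[1; 32]), Some(vec![0xde, 0xad]));
    assert_eq!(store.mempool_tx_count(), 0);
}
```

Keeping the snapshot operation separate lets a disk-backed implementation batch its I/O instead of hitting storage on every transaction event.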
dash-spv/src/chain/reorg.rs (1)
`280-280`: I'll locate the `remove_tip` and `add_header` definitions via a broader search. This will confirm whether they validate the chain structure internally.

```shell
#!/bin/bash
# Search for remove_tip definition
rg -n "fn remove_tip" -C2
# Search for add_header definition
rg -n "fn add_header" -C2
# Find ChainState declaration
rg -n "struct ChainState" -C2
# Inspect ChainState impl block
rg -n "impl ChainState" -C2 -A5
```
@coderabbitai review
✅ Actions performed: Review triggered.
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 6
♻️ Duplicate comments (13)
dash-spv/src/bloom/builder.rs (1)
`8-8`: **LGTM: Code duplication issue resolved.** The utility functions are now properly imported from the shared `utils` module, addressing the duplication concern from previous reviews.

dash-spv-ffi/src/types.rs (2)
`162-164`: **Good fix for timestamp conversion safety.** The timestamp conversion now properly handles potential panics by using `unwrap_or(std::time::Duration::from_secs(0))` instead of `unwrap`, addressing the previous review concern.

`241-241`: **Consistent error handling in peer info conversion.** The last_seen timestamp conversion also properly handles potential errors with `unwrap_or(std::time::Duration::from_secs(0))`, maintaining consistency with other timestamp conversions.

dash-spv/src/network/handshake.rs (1)
`279-294`: **Good refactoring of headers2 negotiation.** The extraction of the `negotiate_headers2` method successfully addresses the previous review comment about duplicate code in the version and verack handlers.

dash-spv/src/chain/chainlock_manager.rs (2)
`144-162`: **Good optimization of cache eviction performance.** The use of `IndexMap` and a batch removal approach effectively addresses the previous review concern about O(n log n) sorting performance. The current implementation is much more efficient.
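The batch-removal idea can be illustrated without the `IndexMap` dependency, using a std `BTreeMap` keyed by height (the `LockCache` struct and its fields are hypothetical, not the manager's real layout):

```rust
use std::collections::BTreeMap;

// Heights are kept in an ordered map, so evicting the oldest `excess`
// entries is a linear prefix scan instead of an O(n log n) sort on
// every insert.
struct LockCache {
    capacity: usize,
    by_height: BTreeMap<u32, &'static str>,
}

impl LockCache {
    fn insert(&mut self, height: u32, lock: &'static str) {
        self.by_height.insert(height, lock);
        if self.by_height.len() > self.capacity {
            let excess = self.by_height.len() - self.capacity;
            // Collect the lowest heights first, then batch-remove them.
            let doomed: Vec<u32> = self.by_height.keys().take(excess).copied().collect();
            for h in doomed {
                self.by_height.remove(&h);
            }
        }
    }
}

fn main() {
    let mut cache = LockCache { capacity: 2, by_height: BTreeMap::new() };
    cache.insert(100, "a");
    cache.insert(101, "b");
    cache.insert(102, "c");
    // Oldest height (100) was evicted in the batch pass.
    assert_eq!(cache.by_height.keys().copied().collect::<Vec<_>>(), vec![101, 102]);
}
```

An `IndexMap` achieves the same effect by preserving insertion order, which is why the reviewed code no longer needs to sort before evicting.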
`111-111`: **Address error context preservation concern.** The current error handling still converts storage errors to strings, losing type information and context as mentioned in the previous review. Consider preserving the original error context:

```diff
-self.store_chain_lock(chain_lock.clone(), storage).await?;
+self.store_chain_lock(chain_lock.clone(), storage).await
+    .map_err(|e| ValidationError::StorageError(e))?;
```

This assumes adding a `StorageError` variant to `ValidationError`.

dash-spv/src/sync/sequential/mod.rs (3)
`163-173`: **Extract duplicate base hash retrieval logic.** The code for retrieving the base hash from storage is duplicated here. Use the existing `get_base_hash_from_storage` helper method instead.
`1134-1192`: **Use genesis hash in locator instead of empty vector.** The code uses empty locator vectors as a fallback, which could result in receiving all headers from genesis. Use the genesis block hash when no headers are stored.
`466-485`: **Incomplete progress reporting implementation.** The method returns a `SyncProgress` with mostly zero/default values. The comments indicate these "Will be filled from storage" but no storage queries are performed.

dash-spv-ffi/src/client.rs (3)
`63-65`: **Thread safety concern with raw pointer storage remains unaddressed.** The raw pointer storage issue from previous reviews persists. Storing `*mut c_void` in `Arc<Mutex<>>` creates use-after-free risks since the pointer lifetime isn't guaranteed. Consider implementing a safer callback registration system that manages user data lifetimes properly.
`254-256`: **Inconsistent callback parameter handling needs verification.** The callback parameters are `Option<extern "C" fn(...)>` but the code doesn't consistently verify `None` before calling. While the current usage appears safe due to the wrapper logic, explicit `None` checks would improve robustness.
`478-483`: **Memory leak risk with CString::forget persists.** Using `std::mem::forget` on a `CString` prevents deallocation, potentially causing memory leaks if the callback doesn't properly manage the string. Document the ownership transfer clearly or manage the string lifetime within Rust.

dash-spv/src/network/multi_peer.rs (1)
`76-80`: **Directory creation before loading reputation data is still missing.** The past review comment identified this exact issue - the code attempts to load reputation data but doesn't ensure the directory exists first.
🧹 Nitpick comments (16)
dash-spv/src/chain/chain_tip.rs (1)
`169-184`: **Consider edge case in eviction logic.** The eviction logic correctly prevents removing the active tip, but there's a potential edge case: if all tips except the active one have the same chain work, the method will consistently pick the first one found by the iterator. While not incorrect, this could lead to non-deterministic behavior. Consider adding a secondary sorting criterion for consistent eviction:

```diff
 let weakest = self.tips
     .iter()
     .filter(|(hash, _)| self.active_tip.as_ref() != Some(hash))
-    .min_by_key(|(_, tip)| &tip.chain_work)
+    .min_by_key(|(hash, tip)| (&tip.chain_work, *hash))
     .map(|(hash, _)| *hash);
```

dash-spv/src/mempool_filter.rs (1)
`704-706`: **Improve time mocking in tests for better coverage.** The current test implementation cannot properly test time-based expiration due to the hardcoded `Instant::now()`. Consider introducing a `TimeProvider` trait that can be mocked in tests:

```rust
pub trait TimeProvider {
    fn now(&self) -> Instant;
}

pub struct SystemTimeProvider;

impl TimeProvider for SystemTimeProvider {
    fn now(&self) -> Instant {
        Instant::now()
    }
}
```

dash-spv-ffi/src/types.rs (1)
`308-333`: **Consider adding validation for FFI enum conversions.** The mempool strategy conversions assume all enum variants are covered. Consider adding validation or default cases to handle potential future additions gracefully.

```diff
 impl From<FFIMempoolStrategy> for MempoolStrategy {
     fn from(strategy: FFIMempoolStrategy) -> Self {
         match strategy {
             FFIMempoolStrategy::FetchAll => MempoolStrategy::FetchAll,
             FFIMempoolStrategy::BloomFilter => MempoolStrategy::BloomFilter,
             FFIMempoolStrategy::Selective => MempoolStrategy::Selective,
+            // Add explicit handling for unknown variants in the future
         }
     }
 }
```

dash-spv/src/network/handshake.rs (2)
`40-42`: **Consider using an enum for boolean state tracking.** Multiple boolean flags (`version_received`, `verack_received`, `version_sent`) could be error-prone. Consider using a more structured approach:

```rust
#[derive(Debug, Clone, PartialEq, Eq)]
enum HandshakeProgress {
    Init,
    VersionSent,
    VersionReceived,
    BothExchanged,
}
```
`95-106`: **Optimize message polling loop.** The current implementation continues immediately on all branches, which could create a busy loop. Consider adding a small delay when no messages are available:

```diff
 Ok(Ok(None)) => {
-    // No message available, continue immediately
-    // The read timeout already provides the necessary delay
+    // No message available, short delay to prevent busy loop
+    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
     continue;
 }
```

dash-spv/src/bloom/manager.rs (2)
88-135: Consider optimizing filter creation for large datasets.

The current implementation holds read locks while iterating through all tracked elements, which could block other operations. Consider batching or releasing locks periodically for large datasets.

```rust
// Consider collecting data first, then creating filter without holding locks
let (addresses_data, outpoints_data, data_elements_data) = {
    let addresses = self.addresses.read().await;
    let outpoints = self.outpoints.read().await;
    let data_elements = self.data_elements.read().await;
    (addresses.clone(), outpoints.clone(), data_elements.clone())
};
// Now create filter without holding locks
```
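The clone-then-build pattern can be sketched in isolation with std locks (the real manager uses tokio's async `RwLock`; the field names and the `build_filter` stand-in here are illustrative):

```rust
use std::collections::HashSet;
use std::sync::RwLock;

// Hypothetical tracker: each lock is held only long enough to clone a
// snapshot, then released before the expensive filter build begins.
struct Tracker {
    addresses: RwLock<HashSet<String>>,
    outpoints: RwLock<HashSet<(u64, u32)>>,
}

impl Tracker {
    fn snapshot(&self) -> (HashSet<String>, HashSet<(u64, u32)>) {
        let addrs = self.addresses.read().unwrap().clone();
        let outs = self.outpoints.read().unwrap().clone();
        (addrs, outs) // both guards dropped here
    }
}

// Stand-in for the real BloomFilter construction: just counts elements.
fn build_filter(addrs: &HashSet<String>, outs: &HashSet<(u64, u32)>) -> usize {
    addrs.len() + outs.len()
}

fn main() {
    let tracker = Tracker {
        addresses: RwLock::new(["a".to_string(), "b".to_string()].into()),
        outpoints: RwLock::new([(1u64, 0u32)].into()),
    };
    let (addrs, outs) = tracker.snapshot();
    // Slow work happens here with no locks held, so writers are not blocked.
    let elements = build_filter(&addrs, &outs);
    assert_eq!(elements, 3);
    println!("filter built from {} elements", elements);
}
```

The trade-off is the cost of cloning the snapshots; for very large tracked sets, chunked copying or an `Arc`-wrapped immutable snapshot may be preferable.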
271-282: Consider error handling in address processing.

The `extract_pubkey_hash` utility function could fail, but the error is silently ignored. Consider logging or handling this case more explicitly.

```diff
 // For P2PKH addresses, also add the public key hash
 if let Some(pubkey_hash) = extract_pubkey_hash(&script) {
     filter.insert(&pubkey_hash);
+} else {
+    tracing::debug!("Could not extract pubkey hash from address: {}", address);
 }
```

dash-spv/src/chain/chainlock_manager.rs (1)
165-168: Handle serialization errors more gracefully.

The bincode serialization could fail, but the current error mapping might not provide sufficient context for debugging.

```diff
 let data = bincode::serialize(&chain_lock)
-    .map_err(|e| StorageError::Serialization(e.to_string()))?;
+    .map_err(|e| StorageError::Serialization(format!("Failed to serialize ChainLock at height {}: {}", chain_lock.block_height, e)))?;
```

dash-spv/src/sync/sequential/mod.rs (2)
541-547: Enhance error message with transition validation details.

The error message for invalid phase transitions could be more informative by including why the transition was rejected.

```diff
-return Err(SyncError::SyncFailed(format!(
-    "Invalid phase transition from {} to {}",
-    self.current_phase.name(),
-    next.name()
-)));
+return Err(SyncError::SyncFailed(format!(
+    "Invalid phase transition from {} to {} - transition requirements not met",
+    self.current_phase.name(),
+    next.name()
+)));
```
1054-1079: Use phase enum comparison instead of string matching.

The phase history search uses string comparison, which could break if phase names change. Consider comparing phase enum variants directly.

```diff
 fn calculate_total_headers_synced(&self) -> u32 {
     self.phase_history
         .iter()
-        .find(|t| t.from_phase == "Downloading Headers")
+        .find(|t| matches!(t.from_phase_enum, SyncPhase::DownloadingHeaders { .. }))
         .and_then(|t| t.final_progress.as_ref())
         .map(|p| p.items_completed)
         .unwrap_or(0)
 }
```

Note: This would require storing the phase enum in the transition record.
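That note can be made concrete with a small sketch. The enum variants, the `from_phase_enum` field, and the record shape below are assumptions for illustration, not the crate's actual types:

```rust
// Hypothetical transition record that stores the phase as an enum instead
// of (or alongside) its display name, so history can be searched with
// `matches!` and survive renames of the display strings.
#[derive(Debug, Clone, PartialEq)]
enum SyncPhase {
    DownloadingHeaders { start_height: u32 },
    DownloadingFilters,
    Complete,
}

struct PhaseTransition {
    from_phase_enum: SyncPhase,
    items_completed: u32,
}

fn total_headers_synced(history: &[PhaseTransition]) -> u32 {
    history
        .iter()
        // Matches any DownloadingHeaders variant regardless of its payload.
        .find(|t| matches!(t.from_phase_enum, SyncPhase::DownloadingHeaders { .. }))
        .map(|t| t.items_completed)
        .unwrap_or(0)
}

fn main() {
    let history = vec![
        PhaseTransition {
            from_phase_enum: SyncPhase::DownloadingHeaders { start_height: 0 },
            items_completed: 2500,
        },
        PhaseTransition {
            from_phase_enum: SyncPhase::DownloadingFilters,
            items_completed: 900,
        },
    ];
    assert_eq!(total_headers_synced(&history), 2500);
    println!("headers synced: {}", total_headers_synced(&history));
}
```

Renaming the human-readable phase label then becomes a display-only change with no effect on history queries.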
dash-spv/src/bloom/stats.rs (1)
153-162: Improve bandwidth saved estimation accuracy.

The bandwidth estimation multiplies transaction size by 10, which seems arbitrary. Consider using a more realistic multiplier based on average block transaction counts, or make the multiplier configurable.

```diff
 } else {
     // Estimate bandwidth saved by not downloading unrelated transactions
-    // Assume average transaction size if this was a true positive
-    self.stats.network_impact.bandwidth_saved_bytes += (tx_size * 10) as u64; // Rough estimate
+    // Estimate based on average transactions per block (~3000) and filter effectiveness (~99%)
+    // This estimates we avoid downloading unrelated transactions per matched transaction
+    const AVG_UNRELATED_TXS_PER_MATCH: usize = 100; // Conservative estimate
+    self.stats.network_impact.bandwidth_saved_bytes += (tx_size * AVG_UNRELATED_TXS_PER_MATCH) as u64;
 }
```

dash-spv/src/sync/masternodes.rs (1)
342-352: Consider handling missing platform data for HP masternodes.

High-performance masternode entries are created with zero values for platform data. This limitation should be documented, or the terminal block data should be extended to include this information.

Add validation or logging:

```diff
 1 => EntryMasternodeType::HighPerformance {
-    platform_http_port: 0, // Not available in stored data
-    platform_node_id: PubkeyHash::all_zeros(), // Not available in stored data
+    platform_http_port: 0, // TODO: Not available in stored data
+    platform_node_id: PubkeyHash::all_zeros(), // TODO: Not available in stored data
 },
```

Consider logging a warning when HP masternodes are loaded without platform data to track this limitation.
dash-spv/src/network/multi_peer.rs (4)
283-283: Consider reducing log verbosity for message reception.

Logging all received messages at debug level could overwhelm logs during active sync periods. Consider using trace level, or being more selective about which messages to log.

```diff
-// Log all received messages at debug level to help troubleshoot
-log::debug!("Received {:?} from {}", msg.cmd(), addr);
+// Log received messages at trace level to reduce verbosity
+log::trace!("Received {:?} from {}", msg.cmd(), addr);
```
474-481: Improve connection duration tracking accuracy.

The current method of estimating connection duration using `loop_iteration * 60` seconds is inaccurate, since loop iterations don't have a fixed duration.

Consider storing the actual connection start time:

```diff
-// Give small positive reputation if peer maintained long connection
-let conn_duration = Duration::from_secs(60 * loop_iteration); // Rough estimate
-if conn_duration > Duration::from_secs(3600) { // 1 hour
+// Give small positive reputation if peer maintained long connection
+// Note: This would require storing connection_start_time when peer reader starts
+if loop_iteration > 60 { // Rough proxy for long connection (>60 iterations)
     reputation_manager.update_reputation(
         addr,
         positive_scores::LONG_UPTIME,
         "Long connection uptime",
     ).await;
 }
```
823-834: Consider a more robust PeerId generation strategy.

The current hash-based approach for generating PeerIds from SocketAddrs could lead to collisions and inconsistencies. The fallback to `PeerId(0)` might also cause issues if multiple calls occur without available peers.

Consider maintaining a consistent mapping, or using a different approach for PeerId generation that ensures uniqueness and stability.
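One possible shape for such a mapping is a registry that hands out monotonically increasing ids and remembers the assignment per address. This is a sketch under assumptions: the `PeerId(u64)` newtype and registry names are illustrative, not the crate's actual types:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Hypothetical id newtype; the real crate may wrap a different integer type.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct PeerId(pub u64);

#[derive(Default)]
pub struct PeerIdRegistry {
    next: u64,
    by_addr: HashMap<SocketAddr, PeerId>,
}

impl PeerIdRegistry {
    // Returns a stable id for the address, allocating a fresh one on first
    // sight. A monotonic counter cannot collide, unlike hashing the address.
    pub fn id_for(&mut self, addr: SocketAddr) -> PeerId {
        if let Some(id) = self.by_addr.get(&addr) {
            return *id;
        }
        self.next += 1; // ids start at 1, leaving 0 free as a sentinel
        let id = PeerId(self.next);
        self.by_addr.insert(addr, id);
        id
    }
}

fn main() {
    let mut registry = PeerIdRegistry::default();
    let a: SocketAddr = "127.0.0.1:9999".parse().unwrap();
    let b: SocketAddr = "127.0.0.1:9998".parse().unwrap();

    let id_a = registry.id_for(a);
    assert_eq!(id_a, registry.id_for(a)); // stable across calls
    assert_ne!(id_a, registry.id_for(b)); // unique per address
    println!("assigned ids: {:?}, {:?}", id_a, registry.id_for(b));
}
```

Starting the counter at 1 also removes the ambiguous `PeerId(0)` fallback, since 0 is never a valid assignment.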
422-427: Reconsider reputation penalty for unsupported transaction types.

Applying reputation penalties for "unknown special transaction type" might be too harsh if these are valid Dash transactions that this SPV client simply doesn't support. Consider distinguishing between truly invalid data and unsupported-but-valid features.

```diff
-// Reputation penalty for invalid data
-reputation_manager.update_reputation(
-    addr,
-    misbehavior_scores::INVALID_TRANSACTION,
-    "Invalid transaction type in block",
-).await;
+// Minor penalty for sending unsupported transaction types
+// (may be valid on network but unsupported by this client)
+reputation_manager.update_reputation(
+    addr,
+    misbehavior_scores::TIMEOUT / 4, // Much smaller penalty
+    "Unsupported transaction type in block",
+).await;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- dash-spv-ffi/src/client.rs (11 hunks)
- dash-spv-ffi/src/types.rs (5 hunks)
- dash-spv-ffi/tests/unit/test_async_operations.rs (7 hunks)
- dash-spv/Cargo.toml (2 hunks)
- dash-spv/src/bloom/builder.rs (1 hunks)
- dash-spv/src/bloom/manager.rs (1 hunks)
- dash-spv/src/bloom/mod.rs (1 hunks)
- dash-spv/src/bloom/stats.rs (1 hunks)
- dash-spv/src/bloom/utils.rs (1 hunks)
- dash-spv/src/chain/chain_tip.rs (1 hunks)
- dash-spv/src/chain/chainlock_manager.rs (1 hunks)
- dash-spv/src/client/config.rs (5 hunks)
- dash-spv/src/client/message_handler.rs (14 hunks)
- dash-spv/src/error.rs (5 hunks)
- dash-spv/src/mempool_filter.rs (1 hunks)
- dash-spv/src/network/handshake.rs (11 hunks)
- dash-spv/src/network/mock.rs (1 hunks)
- dash-spv/src/network/mod.rs (4 hunks)
- dash-spv/src/network/multi_peer.rs (27 hunks)
- dash-spv/src/storage/sync_state.rs (1 hunks)
- dash-spv/src/storage/sync_storage.rs (1 hunks)
- dash-spv/src/sync/headers.rs (11 hunks)
- dash-spv/src/sync/masternodes.rs (8 hunks)
- dash-spv/src/sync/sequential/mod.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- dash-spv/src/bloom/mod.rs
- dash-spv/src/storage/sync_storage.rs
- dash-spv/src/network/mod.rs
- dash-spv/Cargo.toml
- dash-spv/src/storage/sync_state.rs
- dash-spv-ffi/tests/unit/test_async_operations.rs
- dash-spv/src/network/mock.rs
🧰 Additional context used
🧬 Code Graph Analysis (2)
dash-spv/src/sync/sequential/mod.rs (7)

- dash-spv/src/sync/sequential/phases.rs (2): `progress` (194-359), `name` (117-127)
- dash-spv/src/sync/headers.rs (5): `new` (32-42), `start_sync` (308-345), `handle_headers2_message` (167-195), `handle_headers_message` (46-163), `reset_pending_requests` (547-553)
- dash-spv/src/sync/sequential/progress.rs (3): `new` (50-64), `default` (335-337), `start_sync` (67-69)
- dash-spv/src/sync/sequential/request_control.rs (1): `new` (73-96)
- dash-spv/src/sync/headers_with_reorg.rs (6): `default` (36-43), `load_headers_from_storage` (100-166), `start_sync` (490-504), `handle_headers2_message` (385-413), `handle_headers_message` (169-221), `reset_pending_requests` (653-660)
- dash-spv/src/sync/filters.rs (1): `active_request_count` (2107-2109)
- dash-spv/src/types.rs (1): `tip_height` (223-225)

dash-spv-ffi/src/types.rs (4)

- dash-spv/src/sync/sequential/phases.rs (1): `progress` (194-359)
- dash-spv/src/client/mod.rs (2): `start` (306-433), `new` (214-303)
- dash-spv/src/types.rs (2): `new` (182-184), `new` (965-985)
- dash-spv/src/mempool_filter.rs (2): `new` (30-44), `new` (276-282)
🔇 Additional comments (33)
dash-spv/src/chain/chain_tip.rs (1)
75-105: LGTM: Atomicity issue has been properly addressed.

The `extend_tip` method now correctly handles the atomicity concern raised in previous reviews by storing the old tip before removal and restoring it if the new tip addition fails. This ensures no data loss occurs during the operation.

dash-spv/src/bloom/utils.rs (2)
6-21: LGTM: Correct P2PKH script validation.

The `extract_pubkey_hash` function correctly validates the P2PKH script pattern and extracts the 20-byte public key hash. The magic number checks match the standard P2PKH opcodes.
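For reference, the standard P2PKH pattern being checked can be sketched over raw script bytes. This is a simplified stand-in operating on plain slices, not the crate's `Script`-based implementation:

```rust
// A 25-byte P2PKH script is:
//   OP_DUP (0x76) OP_HASH160 (0xa9) <push 20 bytes (0x14)>
//   <20-byte pubkey hash> OP_EQUALVERIFY (0x88) OP_CHECKSIG (0xac)
fn extract_pubkey_hash(script: &[u8]) -> Option<[u8; 20]> {
    if script.len() == 25
        && script[0] == 0x76 // OP_DUP
        && script[1] == 0xa9 // OP_HASH160
        && script[2] == 0x14 // push of exactly 20 bytes
        && script[23] == 0x88 // OP_EQUALVERIFY
        && script[24] == 0xac // OP_CHECKSIG
    {
        let mut hash = [0u8; 20];
        hash.copy_from_slice(&script[3..23]);
        Some(hash)
    } else {
        None
    }
}

fn main() {
    // Build a valid P2PKH script around a dummy 20-byte hash.
    let mut script = vec![0x76, 0xa9, 0x14];
    script.extend_from_slice(&[0x11u8; 20]);
    script.extend_from_slice(&[0x88, 0xac]);

    assert_eq!(extract_pubkey_hash(&script), Some([0x11u8; 20]));
    assert_eq!(extract_pubkey_hash(&[0x00, 0x14]), None); // not P2PKH
    println!("extracted 20-byte hash from {}-byte script", script.len());
}
```

Any script that is not exactly this 25-byte template (P2SH, P2PK, OP_RETURN, etc.) returns `None`, which is why the caller's silent-`None` path discussed elsewhere in this review deserves at least a debug log.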
23-29: LGTM: Proper outpoint serialization.

The `outpoint_to_bytes` function correctly serializes the outpoint by concatenating the transaction ID and the little-endian representation of the output index, which is the standard format for bloom filter inclusion.

dash-spv/src/error.rs (3)
26-29: LGTM: General error variant provides useful fallback.

The new `General(String)` variant in `SpvError` provides a useful catch-all for errors that don't fit other specific categories.
144-170: Well-designed error categorization with fallback logic.

The `category()` method provides useful error classification with intelligent fallback logic for legacy `SyncFailed` errors. The heuristic substring matching is a pragmatic approach for backward compatibility.
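The substring heuristic might look roughly like the sketch below. The category names and keyword sets are illustrative assumptions, not the crate's actual mapping:

```rust
// Hypothetical fallback categorizer for legacy SyncFailed(String) errors:
// newer error variants carry an explicit category, while old free-form
// messages are bucketed by keyword matching.
fn categorize_legacy(msg: &str) -> &'static str {
    let m = msg.to_lowercase();
    if m.contains("timeout") || m.contains("connection") || m.contains("peer") {
        "network"
    } else if m.contains("storage") || m.contains("disk") {
        "storage"
    } else if m.contains("validation") || m.contains("invalid") {
        "validation"
    } else {
        "general" // catch-all, mirroring the new General(String) variant
    }
}

fn main() {
    assert_eq!(categorize_legacy("Peer connection timeout"), "network");
    assert_eq!(categorize_legacy("disk write failed"), "storage");
    assert_eq!(categorize_legacy("invalid header at height 42"), "validation");
    assert_eq!(categorize_legacy("something unexpected"), "general");
    println!("all legacy messages categorized");
}
```

Ordering matters: the first matching bucket wins, so more specific keywords should be checked before broader ones.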
187-212: Comprehensive test coverage for error categorization.

The test thoroughly covers all error variants and validates both explicit categorization and fallback heuristics for legacy errors.
dash-spv/src/client/config.rs (3)
12-22: Well-designed mempool strategy enum.

The `MempoolStrategy` enum provides clear options with good documentation about the privacy/efficiency trade-offs of each approach.

365-377: Proper validation for mempool configuration.

The validation logic correctly ensures that mempool-related parameters are properly configured when mempool tracking is enabled, including the strategy-specific requirement for `recent_send_window_secs` in Selective mode.

312-342: Excellent fluent interface for mempool configuration.

The builder methods provide a clean, fluent interface for configuring mempool options with sensible method names and chaining support.
dash-spv/src/bloom/builder.rs (2)
96-114: Well-designed wallet integration with proper sizing.

The `from_wallet` method intelligently sizes the bloom filter based on wallet contents and applies a reasonable 2x multiplier for future growth. The async interface allows for non-blocking wallet data retrieval.

116-152: Robust bloom filter construction with comprehensive element handling.

The `build` method properly handles all element types (addresses, outpoints, raw data) and includes the important optimization of adding both script pubkey and pubkey hash for P2PKH addresses to improve filter efficiency.

dash-spv/src/mempool_filter.rs (1)
218-824: Excellent test coverage implementation.

The test suite comprehensively covers the critical functionality mentioned in past reviews, including `is_transaction_relevant` with various watch item types, `process_transaction` with different scenarios, and edge cases like capacity limits.

dash-spv/src/network/handshake.rs (1)

253-256: Good integration of mempool strategy with relay flag.

The conditional relay flag setting based on mempool strategy is well-implemented and follows the protocol requirements appropriately.
dash-spv/src/bloom/manager.rs (2)
14-42: Well-designed configuration structure.

The `BloomFilterConfig` provides comprehensive configuration options with sensible defaults. The auto-recreation threshold and statistics tracking are particularly valuable features.

204-221: Good defensive programming in contains method.

The fallback to `true` when no filter exists is well-documented and follows the principle of not missing potentially relevant transactions.
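That fail-open behavior can be illustrated with a minimal stand-in (a plain list replaces the real bloom filter, and the type names are hypothetical):

```rust
// Hypothetical manager: `contains` reports a match whenever no filter has
// been built yet, so potentially relevant transactions are never dropped
// just because initialization hasn't finished.
struct FilterManager {
    filter: Option<Vec<Vec<u8>>>, // stand-in for a real bloom filter
}

impl FilterManager {
    fn contains(&self, element: &[u8]) -> bool {
        match &self.filter {
            // Fail open: no filter means we cannot rule anything out.
            None => true,
            Some(entries) => entries.iter().any(|e| e.as_slice() == element),
        }
    }
}

fn main() {
    let uninitialized = FilterManager { filter: None };
    assert!(uninitialized.contains(b"anything")); // fail-open default

    let built = FilterManager { filter: Some(vec![b"abc".to_vec()]) };
    assert!(built.contains(b"abc"));
    assert!(!built.contains(b"xyz"));
    println!("fail-open semantics verified");
}
```

This mirrors bloom filter semantics generally: false positives (extra matches) are acceptable, false negatives (missed relevant data) are not.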
213-237: Well-implemented chain lock violation detection.

The `would_violate_chain_lock` method properly checks for reorganization conflicts with chain locks and respects the enforcement flag for testing scenarios.

319-332: Comprehensive statistics implementation.

The statistics gathering provides valuable insights into chain lock state and cache utilization, which will be helpful for monitoring and debugging.
dash-spv/src/bloom/stats.rs (1)
82-124: Efficient circular buffer implementation.

Good use of `VecDeque` for the query times circular buffer, providing O(1) operations for both ends.

dash-spv/src/sync/masternodes.rs (2)
64-110: Well-structured terminal block validation.

Good refactoring that extracts the validation logic into a reusable method and properly distinguishes between blocks with and without pre-calculated data in the log messages.

276-405: Excellent implementation of pre-calculated masternode data loading.

The TODO for loading pre-calculated masternode data has been properly addressed. The implementation includes comprehensive parsing, validation, and error handling for all masternode entry fields.
dash-spv/src/client/message_handler.rs (3)
156-200: Comprehensive mempool transaction processing with proper filtering.

The mempool transaction processing implementation is well-structured with proper filtering, state management, and event emission. The logic correctly handles the case when mempool tracking is disabled.

277-299: Mempool-aware transaction inventory handling with proper configuration checks.

The transaction inventory handling correctly integrates with the mempool filter and respects configuration flags. The logging provides good visibility into the decision-making process.
72-126: Verification script to locate and inspect the handler:

```shell
#!/bin/bash
# Locate message_handler.rs in the repository
echo "Searching for message_handler.rs..."
find . -type f -name 'message_handler.rs'

# If found, display the first 200 lines for context
FILE_PATH=$(find . -type f -name 'message_handler.rs' | head -n1)
if [ -n "$FILE_PATH" ]; then
    echo "Found file at: $FILE_PATH"
    sed -n '1,200p' "$FILE_PATH"
else
    echo "message_handler.rs not found."
    exit 1
fi
```

dash-spv/src/sync/headers.rs (4)

165-195: Well-implemented Headers2 compression support with proper error handling.

The Headers2 message handling correctly decompresses headers, logs compression statistics, and processes them through the existing validation flow. The error handling and logging provide good visibility into compression performance.

434-446: Correctly fixes header validation chain continuity issue.

This change addresses the past review comment by ensuring existing headers are added to the validated vector, maintaining chain continuity for subsequent header validation. The fix is correct and necessary.

210-212: Aggressive timeout detection improves sync responsiveness.

Reducing the timeout from 10 seconds to 500 milliseconds when peers are connected makes timeout detection more responsive, which should improve sync performance and user experience.

546-564: Useful utility methods for sync state management.

The new utility methods provide clean interfaces for resetting sync state, accessing compression statistics, and managing peer-specific Headers2 state. These support the enhanced sync architecture well.

dash-spv-ffi/src/client.rs (3)

106-191: Well-structured event listener with comprehensive event handling.

The event listener implementation properly handles all SPV event types with appropriate logging and callback invocation. The threading model safely isolates event processing from the main client thread.

1388-1418: Well-implemented mempool tracking enablement.

The mempool tracking function correctly converts the FFI strategy type and delegates to the underlying client. Error handling and logging are appropriate.
1565-1574: Proper mempool balance conversion with clear field mapping.

The conversion from `MempoolBalance` to `FFIBalance` correctly maps the fields and provides clear semantics for mempool-only balance queries.

dash-spv/src/network/multi_peer.rs (3)

634-665: Excellent enhancement for compact filter peer selection.

This logic properly ensures that filter-related messages are only sent to peers that support compact filters, improving protocol compliance and preventing failed requests.

291-298: Well-implemented Headers2 protocol support.

The handling of SendHeaders2, Headers2, and GetHeaders2 messages is correctly implemented with appropriate state tracking and logging. The decision to ignore GetHeaders2 requests is correct for an SPV client. Also applies to: 356-370.

1090-1172: Valuable additions to the NetworkManager trait.

These new methods provide essential functionality for peer capability detection and network state management. The implementations are comprehensive with proper logging and error handling.
🤖 CodeRabbitAI Fixes Summary

I've automatically addressed CodeRabbitAI review comments from this PR. Here's what was done:

📊 Statistics

📝 Commits

The following commits were created to address the issues:

✅ Verification

🔍 Review Instructions

This summary was generated automatically by
@coderabbitai review

✅ Actions performed

Review triggered.
Actionable comments posted: 3
♻️ Duplicate comments (10)
dash-spv/src/sync/headers_with_reorg.rs (1)
315-342: Critical: Chain reorganization is not implemented.

The TODO comment indicates that the actual reorganization is skipped due to a borrow conflict. This means the core functionality of this module is not working.
Consider these solutions to resolve the borrow conflict:
- Collect all necessary data before the mutable operations
- Use interior mutability (Arc<RwLock<>> or similar) for shared state
- Refactor to separate read and write phases
```rust
// Example approach: collect data first
let reorg_data = {
    let sync_storage = SyncStorageAdapter::new(storage);
    // Collect all needed data for reorganization
    self.reorg_manager.prepare_reorganization(
        &self.chain_state,
        &fork_clone,
        &sync_storage,
    )?
};

// Then perform the reorganization with mutable access
let event = self.reorg_manager.apply_reorganization(
    &mut self.chain_state,
    &mut self.wallet_state,
    reorg_data,
    storage,
).await?;
```

dash-spv/src/network/connection.rs (1)
134-139: Make read timeout configurable in connect_instance.

The hardcoded 15ms timeout is inconsistent with the configurable timeout in the `connect` method and may be too aggressive.

```diff
 // Set a read timeout instead of non-blocking mode
 // This allows us to return None when no data is available
-// Using 15ms timeout for optimal balance of performance and reliability
-stream.set_read_timeout(Some(Duration::from_millis(15))).map_err(|e| {
+// Use the configured timeout for consistency
+let read_timeout = Duration::from_millis(100); // TODO: Get from config
+stream.set_read_timeout(Some(read_timeout)).map_err(|e| {
     NetworkError::ConnectionFailed(format!("Failed to set read timeout: {}", e))
 })?;
```

dash-spv/src/network/reputation.rs (2)
165-168: Previous overflow issue has been properly addressed.

The decay calculation now correctly handles potential overflow by clamping the intervals value and using saturating multiplication, which addresses the previous review comment about integer overflow.

418-483: Comprehensive validation addresses previous concerns.

The `load_from_storage` method now includes extensive validation for all loaded data fields, including score bounds checking, ban count limits, action count validation, and corruption detection. This properly addresses the previous review comment about missing validation.

dash-spv/src/client/message_handler.rs (1)
72-110: Efficient message routing without unnecessary cloning.

The refactored message handling logic elegantly separates sync-only messages (Headers2, MnListDiff, CFHeaders) that can be moved to the sync manager without cloning, addressing the previous performance concern about expensive message cloning.
dash-spv/src/network/multi_peer.rs (2)
82-88: Directory creation issue properly resolved.

The code now properly ensures the directory exists before attempting to load reputation data, addressing the previous review comment about missing directory creation.

1188-1199: Recursive call issue has been resolved.

The method now properly calls the instance method `get_last_message_peer().await` instead of recursively calling itself, fixing the previous infinite recursion bug.

dash-spv-ffi/src/client.rs (1)
33-51: Thread safety concern with raw pointers in CallbackInfo.

While the callback registry improves over direct raw pointer passing, storing `*mut c_void` user data in `CallbackInfo` still poses thread safety risks. The pointed-to data could be deallocated while the callback is registered, leading to use-after-free bugs.

The `unsafe impl Send`/`Sync` implementations assume the callbacks and user data are thread-safe, but this isn't enforced.

Consider documenting the lifetime requirements clearly or implementing a safer abstraction:

```diff
 /// Information stored for each callback
+///
+/// # Safety
+/// The user_data pointer must remain valid for the entire lifetime of the callback registration.
+/// Callers are responsible for ensuring the pointed-to data is not freed until after
+/// unregistering the callback.
 enum CallbackInfo {
```

dash-spv/src/client/mod.rs (2)
1783-1828: Add header chain validation during batch loading.

Headers loaded from storage should be validated to ensure chain integrity before adding to the chain state. Currently headers are added without verification that they form a valid chain.

```diff
 // Add validated headers to chain state
 let mut state = self.state.write().await;
+let mut prev_hash = if current_height == 1 {
+    // For the first batch, get genesis hash from state
+    state.get_genesis_hash()
+} else {
+    // Get the hash of the previous header
+    state.header_at_height(current_height - 1)
+        .map(|h| h.block_hash())
+};
+
 for header in headers {
+    // Validate that this header connects to the previous one
+    if let Some(expected_prev) = prev_hash {
+        if header.prev_blockhash != expected_prev {
+            tracing::error!(
+                "Header chain broken at height {}: expected prev_hash {}, got {}",
+                current_height, expected_prev, header.prev_blockhash
+            );
+            return Ok(false); // Start fresh sync
+        }
+    }
+
     state.add_header(header);
     loaded_count += 1;
+    prev_hash = Some(header.block_hash());
+    current_height += 1;
 }
```
1951-2002: Update storage when rolling back chain state.

The rollback method modifies in-memory chain state but doesn't update persistent storage for headers above the target height, which could lead to inconsistency between memory and disk state.

```diff
 // Update persistent storage to reflect the rollback
 // Store the updated chain state
 self.storage.store_chain_state(&updated_state).await
     .map_err(|e| SpvError::Storage(e))?;

+// Remove headers from storage above target height
+if self.config.enable_persistence {
+    if let Err(e) = self.storage.rollback_headers_to_height(target_height).await {
+        tracing::error!("Failed to rollback headers in storage to height {}: {}", target_height, e);
+        // Continue anyway - the chain state is already updated in memory
+    }
+
+    // Remove filter headers from storage above target height
+    if let Err(e) = self.storage.rollback_filter_headers_to_height(target_height).await {
+        tracing::error!("Failed to rollback filter headers in storage to height {}: {}", target_height, e);
+    }
+}
+
 // Clear any cached filter data above the target height
 // Note: Since we can't directly remove individual filters from storage,
 // the next sync will overwrite them as needed
```
🧹 Nitpick comments (8)
dash-spv/src/network/tests.rs (1)
24-24: Consider increasing the read timeout for tests.

The 15ms read timeout might be too aggressive for test environments where there could be system load or CI environment variability. Consider using a more conservative value like 100ms, or matching the production default.

```diff
-read_timeout: Duration::from_millis(15),
+read_timeout: Duration::from_millis(100), // More reliable for test environments
```

dash-spv/src/sync/masternodes.rs (1)
295-418: Consider adding summary metrics for skipped entries.

While the error handling gracefully skips invalid entries, it would be helpful to log a summary of how many entries were successfully loaded vs. skipped.

Add a counter for skipped entries and log a summary:

```diff
 // Convert stored masternode entries to MasternodeListEntry
 let mut masternodes = BTreeMap::new();
+let mut skipped_count = 0;
 for stored_mn in &terminal_data.masternode_list {
     // Parse ProTxHash
     let pro_tx_hash_bytes = match hex::decode(&stored_mn.pro_tx_hash) {
         Ok(bytes) if bytes.len() == 32 => {
             let mut arr = [0u8; 32];
             arr.copy_from_slice(&bytes);
             arr
         }
         _ => {
             tracing::warn!("Invalid ProTxHash for masternode: {}", stored_mn.pro_tx_hash);
+            skipped_count += 1;
             continue;
         }
     };
     // ... rest of parsing ...
 }
+if skipped_count > 0 {
+    tracing::warn!(
+        "Skipped {} invalid masternode entries out of {} total",
+        skipped_count,
+        terminal_data.masternode_list.len()
+    );
+}
```
740-774: Monitor performance impact of block_in_place usage.

The SyncStorageAdapter uses `block_in_place` to adapt async storage to sync trait requirements. While this works, it could impact tokio runtime performance by blocking worker threads.

Consider monitoring the performance impact and potentially:
- Refactoring the ChainStorage trait to be async
- Using a dedicated thread pool for storage operations
- Caching frequently accessed headers to reduce blocking calls
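The dedicated-thread option from the list above can be sketched with std channels; the request enum, the lookup stand-in, and all names are placeholders for the real storage API:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request type: each variant carries a reply channel so the
// caller can wait for the result without blocking an executor worker.
enum StorageRequest {
    GetHeaderHeight { hash: u64, reply: mpsc::Sender<Option<u32>> },
}

// Spawns one worker thread that owns all blocking storage access and
// serves requests arriving over the channel.
fn spawn_storage_worker() -> mpsc::Sender<StorageRequest> {
    let (tx, rx) = mpsc::channel::<StorageRequest>();
    thread::spawn(move || {
        // Stand-in for a disk-backed header index.
        let lookup = |hash: u64| if hash == 0xFEED { Some(1234) } else { None };
        for request in rx {
            match request {
                StorageRequest::GetHeaderHeight { hash, reply } => {
                    // All blocking I/O stays on this thread.
                    let _ = reply.send(lookup(hash));
                }
            }
        }
    });
    tx
}

fn main() {
    let storage = spawn_storage_worker();
    let (reply_tx, reply_rx) = mpsc::channel();
    storage
        .send(StorageRequest::GetHeaderHeight { hash: 0xFEED, reply: reply_tx })
        .unwrap();
    assert_eq!(reply_rx.recv().unwrap(), Some(1234));
    println!("height lookup served off the caller's thread");
}
```

In an async context the reply side would be a oneshot channel awaited by the task, which avoids both `block_in_place` and starving the runtime's worker threads.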
dash-spv/src/client/message_handler.rs (1)
124-137: Optimize block message handling further.

Consider caching the sync phase check result to avoid repeated calls to `is_in_downloading_blocks_phase()` for each block message, as this method might involve internal state checks.

```diff
+// Cache the sync phase to avoid repeated checks
+let is_downloading_blocks = self.sync_manager.is_in_downloading_blocks_phase();
+
 match &message {
     NetworkMessage::Block(_) => {
         // Blocks can be large - avoid cloning unless necessary
         // Check if sync manager actually needs to process this block
-        if self.sync_manager.is_in_downloading_blocks_phase() {
+        if is_downloading_blocks {
```
851-870: Consider adding rate limiting for manual banning.

The manual banning functionality is well-implemented, but consider adding rate limiting or administrative checks to prevent abuse if this method is exposed through any external interfaces.

```diff
 pub async fn ban_peer(&self, addr: &SocketAddr, reason: &str) -> Result<(), Error> {
+    // TODO: Consider adding rate limiting or admin verification for manual bans
     log::info!("Manually banning peer {} - reason: {}", addr, reason);
```
922-999: Improve timeout handling for network message reception.

The current timeout implementation may cause busy waiting when no messages are available. Consider using a longer timeout with proper backoff to reduce CPU usage while maintaining responsiveness.

```diff
 // Handle network messages with timeout for responsiveness
 match tokio::time::timeout(
-    std::time::Duration::from_millis(1000),
+    std::time::Duration::from_millis(5000), // Longer timeout to reduce busy waiting
     self.network.receive_message()
 ).await {
     Ok(msg_result) => match msg_result {
         Ok(Some(message)) => {
             // Message handled successfully - reset backoff
             // [existing message handling code]
         }
         Ok(None) => {
-            // No message available, brief pause before continuing
-            tokio::time::sleep(std::time::Duration::from_millis(100)).await;
+            // No message available, shorter pause since we already waited
+            tokio::time::sleep(std::time::Duration::from_millis(10)).await;
         }
         Err(e) => {
             tracing::error!("Network error during monitoring: {}", e);
-            tokio::time::sleep(std::time::Duration::from_secs(5)).await;
+            tokio::time::sleep(std::time::Duration::from_secs(2)).await; // Shorter backoff for network errors
         }
     }
     Err(_) => {
         // Timeout occurred - this is expected and allows checking running state
         // Continue the loop to check if we should stop
+        // Brief pause to prevent tight loop on timeouts
+        tokio::time::sleep(std::time::Duration::from_millis(50)).await;
     }
```
529-540: Consider atomic update for mempool filter.

The current implementation recreates the entire `MempoolFilter` when watch items change, which could be inefficient for frequent updates and may cause race conditions if mempool operations are concurrent.

Consider implementing an `update_watch_items` method on `MempoolFilter` instead of recreating the entire filter:

```diff
 async fn update_mempool_filter(&mut self) {
-    let watch_items = self.watch_items.read().await.iter().cloned().collect();
-    self.mempool_filter = Some(Arc::new(MempoolFilter::new(
-        self.config.mempool_strategy,
-        Duration::from_secs(self.config.recent_send_window_secs),
-        self.config.max_mempool_transactions,
-        self.mempool_state.clone(),
-        watch_items,
-    )));
-    tracing::info!("Updated mempool filter with current watch items");
+    if let Some(ref filter) = self.mempool_filter {
+        let watch_items = self.watch_items.read().await.iter().cloned().collect();
+        filter.update_watch_items(watch_items).await;
+        tracing::info!("Updated mempool filter with current watch items");
+    } else {
+        // Create new filter if none exists
+        let watch_items = self.watch_items.read().await.iter().cloned().collect();
+        self.mempool_filter = Some(Arc::new(MempoolFilter::new(
+            self.config.mempool_strategy,
+            Duration::from_secs(self.config.recent_send_window_secs),
+            self.config.max_mempool_transactions,
+            self.mempool_state.clone(),
+            watch_items,
+        )));
+    }
 }
```
247-261: Inconsistent callback parameter naming.

The sync functions have inconsistent callback parameter names and formatting. For better readability and maintainability, consider standardizing the parameter names and formatting.

```diff
 int32_t dash_spv_ffi_client_sync_to_tip(struct FFIDashSpvClient *client,
-                                        void (*progress_callback)(double, const char*, void*),
-                                        void (*completion_callback)(bool, const char*, void*),
+                                        void (*progress_callback)(double percentage, const char* stage_message, void* user_data),
+                                        void (*completion_callback)(bool success, const char* error_message, void* user_data),
                                         void *user_data);

 int32_t dash_spv_ffi_client_sync_to_tip_with_progress(struct FFIDashSpvClient *client,
-                                                      void (*progress_callback)(const struct FFIDetailedSyncProgress*,
-                                                                                void*),
-                                                      void (*completion_callback)(bool,
-                                                                                  const char*,
-                                                                                  void*),
+                                                      void (*progress_callback)(const struct FFIDetailedSyncProgress* progress,
+                                                                                void* user_data),
+                                                      void (*completion_callback)(bool success,
+                                                                                  const char* error_message,
+                                                                                  void* user_data),
                                                       void *user_data);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- dash-spv-ffi/include/dash_spv_ffi.h (9 hunks)
- dash-spv-ffi/src/client.rs (10 hunks)
- dash-spv-ffi/src/types.rs (7 hunks)
- dash-spv/src/chain/chain_work.rs (1 hunks)
- dash-spv/src/client/config.rs (8 hunks)
- dash-spv/src/client/message_handler.rs (14 hunks)
- dash-spv/src/client/mod.rs (35 hunks)
- dash-spv/src/mempool_filter.rs (1 hunks)
- dash-spv/src/network/connection.rs (15 hunks)
- dash-spv/src/network/multi_peer.rs (27 hunks)
- dash-spv/src/network/reputation.rs (1 hunks)
- dash-spv/src/network/tests.rs (2 hunks)
- dash-spv/src/storage/disk.rs (14 hunks)
- dash-spv/src/sync/headers_with_reorg.rs (1 hunks)
- dash-spv/src/sync/masternodes.rs (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- dash-spv/src/chain/chain_work.rs
- dash-spv/src/mempool_filter.rs
- dash-spv/src/client/config.rs
🧰 Additional context used
🧬 Code Graph Analysis (3)
dash-spv/src/storage/disk.rs (3)
dash-spv/src/types.rs (3)
`new` (182-184), `new` (965-985), `txid` (988-990)

dash-spv/src/storage/memory.rs (22)

`new` (39-54), `store_sync_state` (337-343), `load_sync_state` (345-354), `clear_sync_state` (356-361), `store_sync_checkpoint` (363-368), `get_sync_checkpoints` (370-388), `store_chain_lock` (390-395), `load_chain_lock` (397-406), `get_chain_locks` (408-426), `store_instant_lock` (428-433), `load_instant_lock` (435-444), `store_terminal_block` (446-449), `load_terminal_block` (451-453), `get_all_terminal_blocks` (455-459), `has_terminal_block` (461-463), `store_mempool_transaction` (466-469), `remove_mempool_transaction` (471-474), `get_mempool_transaction` (476-478), `get_all_mempool_transactions` (480-482), `store_mempool_state` (484-487), `load_mempool_state` (489-491), `clear_mempool` (493-497)

dash-spv/src/storage/mod.rs (21)

`store_sync_state` (136-136), `load_sync_state` (139-139), `clear_sync_state` (142-142), `store_sync_checkpoint` (145-145), `get_sync_checkpoints` (148-148), `store_chain_lock` (151-151), `load_chain_lock` (154-154), `get_chain_locks` (157-157), `store_instant_lock` (160-160), `load_instant_lock` (163-163), `store_terminal_block` (166-166), `load_terminal_block` (169-169), `get_all_terminal_blocks` (172-172), `has_terminal_block` (175-175), `store_mempool_transaction` (179-179), `remove_mempool_transaction` (182-182), `get_mempool_transaction` (185-185), `get_all_mempool_transactions` (188-188), `store_mempool_state` (191-191), `load_mempool_state` (194-194), `clear_mempool` (197-197)
dash-spv-ffi/include/dash_spv_ffi.h (1)
swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h (10)
`height` (106-106), `txid` (108-113), `txid` (117-121), `txid` (123-126), `txid` (128-128), `confirmed` (115-115), `progress_callback` (201-201), `progress_callback` (208-209), `completion_callback` (202-202), `completion_callback` (210-212)
dash-spv/src/client/mod.rs (11)
dash-spv/src/sync/filters.rs (3)
`mpsc` (2271-2271), `new` (139-171), `is_filter_sync_available` (184-186)

dash-spv/src/client/status_display.rs (3)

`stats` (68-71), `chain_state` (74-77), `sync_progress` (40-65)

dash-spv/src/mempool_filter.rs (3)

`new` (30-44), `new` (297-303), `network` (313-315)

dash-spv/src/wallet/mod.rs (5)

`new` (77-85), `new` (110-119), `default` (103-105), `network` (122-125), `rollback_to_height` (709-754)

dash-spv/src/sync/sequential/mod.rs (3)

`new` (67-88), `start_sync` (116-149), `handle_inventory` (1140-1233)

dash-spv/src/client/config.rs (2)

`new` (220-226), `default` (165-215)

dash-spv/src/client/message_handler.rs (4)

`new` (33-61), `handle_inventory` (262-357), `process_new_block_hash` (379-389), `process_new_block` (421-441)

dash-spv/src/types.rs (9)

`new` (182-184), `new` (965-985), `default` (63-79), `default` (167-177), `default` (355-357), `default` (671-696), `address` (436-441), `txid` (988-990), `tip_hash` (228-230)

dash-spv/src/client/block_processor.rs (1)

`new` (37-53)

dash-spv/src/client/watch_manager.rs (1)

`remove_watch_item` (77-131)

dash-spv/src/storage/sync_state.rs (1)

`from_chain_state` (148-194)
🔇 Additional comments (40)
dash-spv/src/network/tests.rs (1)
44-61: LGTM! Well-structured test configuration additions.

The new mempool and request control fields are properly initialized with sensible defaults for testing.
dash-spv/src/sync/masternodes.rs (2)
464-464: LGTM! Appropriate error type for network-related failure.

Using `SyncError::Network` is more semantically correct for a missing genesis hash.
681-694: LGTM! Proper terminal block validation integration.

The validation ensures terminal blocks match expected hashes and appropriately fails the sync if validation fails.
dash-spv-ffi/src/types.rs (2)
92-167: LGTM! Well-designed FFI types for sync progress.

The sync stage enum and detailed progress struct properly handle all fields with safe conversions and appropriate defaults.
373-482: Excellent safety documentation and memory management!

The `FFIUnconfirmedTransaction` struct has comprehensive safety documentation and proper destroy functions for all allocated resources. The cascading destroy pattern ensures no memory leaks.
dash-spv/src/sync/headers_with_reorg.rs (1)
409-418: Headers2 support is implemented but disabled.

The TODO indicates GetHeaders2 doesn't receive responses from peers. This should be investigated to enable the compression benefits.
Consider adding debug logging to diagnose why peers aren't responding to GetHeaders2:
```diff
-        // TODO: Debug why GetHeaders2 doesn't receive response from peer
-        // For now, always use regular GetHeaders
-        if false && use_headers2 {
+        // Enable with debug logging to diagnose peer response issues
+        if use_headers2 {
+            tracing::debug!("Peer supports headers2, sending GetHeaders2 request");
```

dash-spv/src/network/connection.rs (1)
175-260: Excellent peer info validation implementation!

The validation comprehensively checks all fields with reasonable limits and provides detailed logging for debugging. The approach of being strict with critical fields while lenient with unknown service flags is appropriate.
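The strict-vs-lenient split the comment praises can be sketched as follows. This is an illustrative model only: the `PeerInfo` struct, the field limits, and `KNOWN_SERVICE_BITS` are assumptions, not the actual types or constants in `dash-spv/src/network/connection.rs`.

```rust
// Hypothetical peer-info type; field names are illustrative.
struct PeerInfo {
    user_agent: String,
    start_height: i32,
    services: u64,
}

// Assumption: the low bits are the service flags this client understands.
const KNOWN_SERVICE_BITS: u64 = 0b1111;

fn validate_peer_info(info: &PeerInfo) -> Result<(), String> {
    // Strict: critical fields must be sane, otherwise reject the peer.
    if info.user_agent.len() > 256 {
        return Err(format!("user_agent too long: {} bytes", info.user_agent.len()));
    }
    if info.start_height < 0 {
        return Err(format!("negative start_height: {}", info.start_height));
    }
    // Lenient: unknown service bits are logged for debugging, not rejected,
    // so newer peers advertising extra capabilities still connect.
    let unknown = info.services & !KNOWN_SERVICE_BITS;
    if unknown != 0 {
        eprintln!("peer advertises unknown service bits: {unknown:#x}");
    }
    Ok(())
}

fn main() {
    let ok = PeerInfo { user_agent: "/dash-spv:0.1/".into(), start_height: 1_900_000, services: 0b1_0001 };
    assert!(validate_peer_info(&ok).is_ok()); // unknown high bit is tolerated

    let bad = PeerInfo { user_agent: "x".repeat(300), start_height: 10, services: 1 };
    assert!(validate_peer_info(&bad).is_err()); // oversized user agent is rejected
}
```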
dash-spv/src/network/reputation.rs (2)
1-7: Excellent module documentation.

The module documentation clearly explains the purpose and key features of the reputation system, including automatic banning, decay for recovery, and tracking of both positive and negative behaviors.
15-79: Well-structured reputation scoring system.

The misbehavior and positive score constants are well-organized into modules with descriptive names and reasonable penalty/reward values. The ban duration and decay parameters provide a good balance for network security.
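The ban-plus-decay mechanics described above can be modeled in a few lines. This is a sketch under stated assumptions: `BAN_THRESHOLD`, `DECAY_PER_HOUR`, and the penalty values are invented for illustration and do not reflect the actual constants in `dash-spv/src/network/reputation.rs`.

```rust
// Assumed constants for illustration only.
const BAN_THRESHOLD: i32 = -100;
const DECAY_PER_HOUR: i32 = 5; // score drifts toward 0 over time

#[derive(Default)]
struct PeerReputation {
    score: i32,
}

impl PeerReputation {
    // Record a positive or negative behavior adjustment.
    fn record(&mut self, delta: i32) {
        self.score += delta;
    }

    fn is_banned(&self) -> bool {
        self.score <= BAN_THRESHOLD
    }

    // Decay toward zero so a misbehaving peer can eventually recover,
    // and a well-behaved peer's bonus does not grow stale forever.
    fn decay(&mut self, hours: i32) {
        let step = DECAY_PER_HOUR * hours;
        if self.score < 0 {
            self.score = (self.score + step).min(0);
        } else {
            self.score = (self.score - step).max(0);
        }
    }
}

fn main() {
    let mut rep = PeerReputation::default();
    rep.record(-60); // e.g. an invalid-header penalty
    rep.record(-60); // e.g. a protocol-violation penalty
    assert!(rep.is_banned());
    rep.decay(10); // ten hours of decay: +50 toward zero
    assert_eq!(rep.score, -70);
    assert!(!rep.is_banned()); // peer has recovered below the ban line
}
```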
dash-spv/src/client/message_handler.rs (2)
167-210: Comprehensive mempool transaction processing.

The transaction handling logic properly integrates mempool filtering, wallet checking, storage persistence, and event emission. The conditional processing based on mempool tracking configuration is well-implemented.
288-309: Smart inventory handling with mempool awareness.

The enhanced inventory processing intelligently filters transaction requests based on mempool configuration and filter decisions, preventing unnecessary network traffic while maintaining functionality.
dash-spv/src/network/multi_peer.rs (3)
157-175: Excellent reputation-aware connection logic.

The connection logic now properly checks peer reputation before attempting connections and records connection attempts, providing a foundation for intelligent peer selection and avoiding problematic peers.
647-707: Smart peer selection with filter support.

The enhanced peer selection logic intelligently chooses peers based on required capabilities (e.g., compact filter support) while maintaining sticky sync peer behavior for consistency. This is a significant improvement for protocol compatibility.
486-495: Positive reputation for connection longevity is excellent.

Rewarding peers that maintain long connections encourages stable network participation and helps identify reliable peers. The 1-hour threshold and positive score adjustment are well-balanced.
dash-spv-ffi/src/client.rs (4)
165-250: Well-structured event listener implementation

The event listener properly handles all SPV events including the new mempool events. Good use of logging for debugging and appropriate thread management.
1492-1686: Well-implemented mempool tracking methods

The mempool tracking methods are properly implemented with:
- Correct error handling and null checks
- Proper conversion between internal and FFI types
- Clear separation of confirmed vs mempool balances
1374-1410: Robust balance aggregation across watched addresses

The implementation properly:
- Handles failures for individual addresses without failing the entire operation
- Logs warnings for failed addresses
- Correctly sums confirmed and unconfirmed balances
984-988: Proper callback cleanup in destructor

Good addition of callback cleanup before destroying the client. This prevents potential use-after-free issues.
dash-spv/src/storage/disk.rs (7)
391-409: Excellent sentinel header implementation

The use of max values for all fields creates clearly distinguishable sentinel headers that cannot be confused with real blocks. Combined with `valid_count` tracking, this properly addresses the previous concern about header padding confusion.
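The sentinel idea the comment describes can be sketched with a simplified header type. This is a hedged illustration, not the real `dashcore::BlockHeader` or the actual `disk.rs` code: the field set, the padding helper, and the sentinel values are assumptions based on the review text (max version, 0xFF hash bytes, far-future timestamp).

```rust
// Simplified stand-in for a block header; illustrative only.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Header {
    version: i32,
    prev_blockhash: [u8; 32],
    time: u32,
}

// A sentinel uses values no valid block can carry.
fn sentinel_header() -> Header {
    Header {
        version: i32::MAX,          // impossible block version
        prev_blockhash: [0xFF; 32], // all-0xFF hash pattern
        time: u32::MAX,             // far-future timestamp
    }
}

fn is_sentinel(h: &Header) -> bool {
    h.version == i32::MAX && h.prev_blockhash == [0xFF; 32] && h.time == u32::MAX
}

// Pad to a fixed segment length for in-memory indexing, tracking how many
// entries are real via a separate valid_count.
fn pad_headers(mut headers: Vec<Header>, target_len: usize) -> (Vec<Header>, usize) {
    let valid_count = headers.len();
    while headers.len() < target_len {
        headers.push(sentinel_header());
    }
    (headers, valid_count)
}

fn main() {
    let real = Header { version: 2, prev_blockhash: [0u8; 32], time: 1_700_000_000 };
    let (padded, valid_count) = pad_headers(vec![real], 4);
    assert_eq!(valid_count, 1);
    assert_eq!(padded.len(), 4);
    // A save routine would filter sentinels out before writing to disk:
    let to_disk: Vec<_> = padded.iter().filter(|h| !is_sentinel(h)).collect();
    assert_eq!(to_disk.len(), 1);
}
```

Because sentinels are filtered at the storage boundary, readers of the persisted file never see padding entries, which is exactly what the tests below (lines 1811-1879) verify.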
900-912: Proper exclusion of sentinel headers from disk storage

Good implementation - sentinel headers are correctly identified and skipped during disk writes, ensuring only actual blockchain headers are persisted.
1009-1074: Smart tiered logging based on batch size

Excellent approach to logging that reduces noise for common operations while maintaining visibility for significant batch operations. The height range information in logs aids debugging.
1533-1804: Well-implemented persistent storage methods

The implementation demonstrates best practices:
- Atomic writes using temp files prevent corruption
- Appropriate serialization formats (JSON for debugging, binary for performance)
- Consistent error handling and logging
- Proper directory structure organization
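The atomic-write pattern in the first bullet can be sketched with std alone: write to a temp file in the same directory, flush, then rename over the target so readers never observe a half-written file. The paths and payload here are illustrative, not the crate's actual file layout.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write `data` to `path` atomically: a crash mid-write leaves either the old
// file or the new one, never a truncated mix.
fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(data)?;
        f.sync_all()?; // ensure bytes hit disk before the rename exposes them
    }
    // rename is atomic on POSIX when source and target share a filesystem
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("spv_state_demo.json");
    atomic_write(&target, b"{\"height\":1276272}")?;
    assert_eq!(fs::read(&target)?, b"{\"height\":1276272}");
    fs::remove_file(&target)?;
    Ok(())
}
```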
1811-1842: Comprehensive test coverage for sentinel header behavior

The test properly validates that sentinel headers are excluded from query results, ensuring the padding mechanism is transparent to API users.
1844-1879: Good verification of sentinel header disk persistence

The test confirms sentinel headers are not written to disk by:
- Storing headers and forcing a save
- Loading from a fresh storage instance
- Verifying only actual headers are returned
1771-1803: Appropriate in-memory storage for mempool data

The mempool storage correctly uses in-memory storage only, which is suitable for transient mempool data. The implementation is thread-safe with proper mutex usage.
dash-spv/src/client/mod.rs (3)
474-521: LGTM: Correct mempool balance calculation logic.

The mempool balance calculation now properly handles both incoming and outgoing transactions by checking the sign of `net_amount` and applying appropriate arithmetic operations. This addresses the previous issue where absolute values were always added.
712-724: LGTM: Proper error recovery for partial sync initialization.

Good implementation of error recovery that resets sync manager state when `send_initial_requests` fails after `start_sync` succeeds. This prevents the sync manager from being left in a partially initialized state.
872-883: LGTM: Resolved static variable thread safety issue.

The sync state saving now uses an instance field `last_sync_state_save` instead of a static atomic variable, properly addressing the thread safety concern from previous reviews.

dash-spv-ffi/include/dash_spv_ffi.h (12)
6-10: Well-designed mempool strategy enum.

The `FFIMempoolStrategy` enum provides clear options for different mempool tracking approaches. The naming is consistent and the values follow standard C enum conventions.
19-27: Comprehensive sync stage enumeration.

The `FFISyncStage` enum covers all major phases of the synchronization process with logical progression from `Connecting` to `Complete`/`Failed`. The naming is descriptive and follows consistent conventions.
52-63: Rich progress reporting structure.

The `FFIDetailedSyncProgress` struct provides comprehensive synchronization progress information including performance metrics (headers_per_second), estimates (estimated_seconds_remaining), and detailed state (stage, connected_peers, etc.). This will enable rich progress reporting to clients.
73-73: Good addition of filter sync availability flag.

Adding `filter_sync_available` to `FFISyncProgress` provides clients with important information about whether filter synchronization is possible, which helps with UI state management and user expectations.
98-99: Appropriate mempool balance fields.

The addition of `mempool` and `mempool_instant` fields to `FFIBalance` is consistent with the existing balance structure and provides the necessary granularity for mempool transaction tracking.
103-118: Excellent FFI safety documentation.

The safety documentation for `FFIArray` is comprehensive and clearly explains the memory ownership transfer semantics. This level of documentation is crucial for FFI interfaces to prevent memory safety issues.
120-142: Well-structured callback function pointer types.

The callback typedefs are well-designed with:
- Consistent parameter naming and ordering
- Appropriate use of `const char*` for string parameters
- Inclusion of `user_data` for context passing
- Clear separation of concerns (block, transaction, balance, mempool events)
144-152: Improved callback structure design.

The updated `FFIEventCallbacks` struct with explicit function pointers is much cleaner than a generic callback approach. This provides type safety and makes the API more self-documenting.
162-193: Outstanding safety documentation for complex struct.

The `FFIUnconfirmedTransaction` struct has exemplary safety documentation that:
- Clearly explains memory ownership responsibilities
- Specifies lifetime requirements
- Documents proper cleanup procedures
- Provides specific function references for memory management
This is exactly the level of documentation needed for complex FFI structures involving raw pointers.
330-340: Comprehensive mempool API additions.

The new mempool-related client functions provide good coverage of mempool functionality:
- Enable/configure mempool tracking
- Retrieve balances including mempool data
- Get transaction counts and record sends
- Address-specific mempool balances
The function naming follows established patterns and the signatures are consistent with existing API functions.
369-387: Thorough mempool configuration API.

The configuration functions provide comprehensive control over mempool behavior including strategy selection, transaction limits, timeouts, and persistence options. The getter/setter pattern is consistent with the existing configuration API.
397-433: Exemplary memory management documentation.

The memory management functions for `FFIUnconfirmedTransaction` have outstanding safety documentation that:
- Specifies exact safety requirements
- Explains the purpose of each function
- Documents proper usage patterns
- Warns about common pitfalls
This level of documentation should be the standard for all FFI memory management functions.
🤖 CodeRabbitAI Fixes Summary

I've automatically addressed CodeRabbitAI review comments from this PR. Here's what was done:

📊 Statistics
📝 Commits

The following commits were created to address the issues:

✅ Verification
🔍 Review Instructions
This summary was generated automatically by
@coderabbitai review
✅ Actions performed

Review triggered.
There was a problem hiding this comment.
Actionable comments posted: 4
♻️ Duplicate comments (7)
dash-spv-ffi/src/client.rs (2)
33-46: Consider documenting raw pointer lifetime requirements.

While the callback registry improves lifetime management for callbacks, the raw `user_data` pointers still require careful handling. The C side must ensure the user data remains valid for the entire duration of the callback registration.

Consider adding documentation to the FFI header file about user data lifetime requirements.
602-603: Implement proper sync cancellation or return an error.

The current implementation only stops the client without cancelling the ongoing sync operation, which could lead to unexpected behavior.
Consider either:
- Implementing proper cancellation using abort handles or cancellation tokens
- Returning an error code indicating cancellation is not fully supported:
-// TODO: Implement proper sync task cancellation using cancellation tokens or abort handles. -// Currently, this only stops the client, but the sync task may continue running in the background. +return FFIErrorCode::NotSupported as i32; // With appropriate error messagedash-spv/src/client/mod.rs (5)
872-883: LGTM: Static variable correctly replaced with instance field.

The implementation properly addresses the previous review concern by using `self.last_sync_state_save` instead of a static variable, ensuring thread safety and proper encapsulation across multiple client instances.
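The throttle pattern behind this fix can be sketched as follows. The struct and interval here are hypothetical stand-ins for the client's real fields; the point is that each instance carries its own `last_sync_state_save`, so two clients in one process can never race on a shared static.

```rust
use std::time::{Duration, Instant};

// Hypothetical client holding its own throttle state.
struct Client {
    last_sync_state_save: Option<Instant>,
    save_interval: Duration,
    saves: u32, // stand-in counter for the real storage write
}

impl Client {
    // Save at most once per `save_interval`; returns whether a save happened.
    fn maybe_save_sync_state(&mut self, now: Instant) -> bool {
        let due = match self.last_sync_state_save {
            None => true, // first call always saves
            Some(last) => now.duration_since(last) >= self.save_interval,
        };
        if due {
            self.saves += 1;
            self.last_sync_state_save = Some(now);
        }
        due
    }
}

fn main() {
    let t0 = Instant::now();
    let mut c = Client { last_sync_state_save: None, save_interval: Duration::from_secs(30), saves: 0 };
    assert!(c.maybe_save_sync_state(t0));                                  // first save
    assert!(!c.maybe_save_sync_state(t0 + Duration::from_secs(5)));        // throttled
    assert!(c.maybe_save_sync_state(t0 + Duration::from_secs(40)));        // interval elapsed
    assert_eq!(c.saves, 2);
}
```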
1994-2044: Storage consistency improved but consider transaction atomicity.

The addition of the `store_chain_state()` call addresses the previous concern about storage consistency. However, consider wrapping the state modifications in a transaction-like pattern to ensure atomicity:

```diff
+        // Begin transaction-like operation
+        let rollback_successful = {
             // Remove headers above target height from in-memory state
             let mut state = self.state.write().await;
             while state.tip_height() > target_height {
                 state.remove_tip();
             }

             // ... other state modifications ...

             // Clone the updated state for storage
             let updated_state = state.clone();
             drop(state);

             // Update persistent storage to reflect the rollback
             self.storage.store_chain_state(&updated_state).await
+        };
+
+        rollback_successful.map_err(|e| SpvError::Storage(e))?;
```
1854-1870: LGTM: Header validation properly implemented.

The addition of header validation before adding to chain state correctly addresses the previous review concern about ensuring chain integrity during restoration.
712-729: Error handling improved but could be more comprehensive.

The addition of the `reset_pending_requests()` call on failure addresses the partial initialization concern from previous reviews. However, consider also resetting the `initial_sync_started` flag to allow retry:

```diff
 // Send initial requests after sync is prepared
 if let Err(e) = self.sync_manager.send_initial_requests(&mut *self.network, &mut *self.storage).await {
     tracing::error!("Failed to send initial sync requests: {}", e);
     // Reset sync manager state to prevent inconsistent state
     self.sync_manager.reset_pending_requests();
+    initial_sync_started = false; // Allow retry on next iteration
     tracing::warn!("Reset sync manager state after send_initial_requests failure");
 }
```
1705-1991: Comprehensive sync state restoration with proper validation.

The method correctly implements bounds checking for recovery suggestions, addressing previous review concerns. However, consider breaking this large method into smaller, focused functions:

```diff
+    /// Validate and apply recovery suggestion from sync state validation.
+    async fn apply_recovery_suggestion(&mut self, suggestion: crate::storage::RecoverySuggestion) -> Result<bool> {
+        // Move the recovery logic here
+    }
+
+    /// Load headers from storage into chain state.
+    async fn load_headers_into_chain_state(&mut self, target_height: u32) -> Result<()> {
+        // Move the header loading logic here
+    }
```

This would improve maintainability and make the logic easier to test and debug.
🧹 Nitpick comments (2)
dash-spv-ffi/src/client.rs (1)
1405-1406: Document why pending values are set to 0.

Consider adding a comment explaining why `pending` and `pending_instant` are hardcoded to 0:

```diff
+// Note: pending and pending_instant are set to 0 as this method only aggregates
+// confirmed and unconfirmed balances. Use get_balance_with_mempool for full balance.
 pending: dashcore::Amount::from_sat(0),
 pending_instant: dashcore::Amount::from_sat(0),
```

dash-spv/src/client/mod.rs (1)
474-520: LGTM: Mempool balance calculation logic is now correct.

The fix properly handles both incoming (positive) and outgoing (negative) transactions, addressing the previous issue where absolute values were always added. The use of saturating subtraction prevents underflows for outgoing transactions.
Minor optimization suggestion: Consider using a HashSet for address lookup instead of iterating through all addresses for each transaction:
```diff
+    // Pre-compute address set for efficient lookup
+    let target_addresses: std::collections::HashSet<_> = [address].iter().cloned().collect();
+
     // Calculate pending balances from mempool transactions
     for tx in mempool_state.transactions.values() {
-        // Check if this transaction affects the given address
-        let mut address_affected = false;
-        for addr in &tx.addresses {
-            if addr == address {
-                address_affected = true;
-                break;
-            }
-        }
+        // Check if this transaction affects the given address
+        let address_affected = tx.addresses.iter().any(|addr| target_addresses.contains(addr));
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
dash-spv-ffi/src/client.rs(10 hunks)dash-spv/src/client/mod.rs(35 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
dash-spv-ffi/src/client.rs (7)
dash-spv-ffi/src/error.rs (2)
`set_last_error` (25-30), `from` (49-59)

dash-spv/src/mempool_filter.rs (1)

`new` (30-44)

dash-spv/src/wallet/mod.rs (3)

`new` (77-85), `new` (110-119), `total` (88-90)

dash-spv/src/client/mod.rs (1)

`new` (215-305)

dash-spv-ffi/src/types.rs (13)

`new` (14-19), `new` (274-286), `from` (39-46), `from` (50-58), `from` (76-89), `from` (105-115), `from` (133-166), `from` (180-191), `from` (207-218), `from` (233-247), `from` (332-338), `from` (342-348), `from` (362-370)

dash-spv/src/types.rs (5)

`new` (182-184), `new` (965-985), `total` (717-719), `txid` (988-990), `address` (436-441)

dash-spv-ffi/include/dash_spv_ffi.h (18)

`confirmed` (129-129), `txid` (122-127), `txid` (131-135), `txid` (137-140), `txid` (142-142), `height` (120-120), `progress_callback` (248-248), `progress_callback` (255-256), `completion_callback` (249-249), `completion_callback` (257-259), `dash_spv_ffi_client_sync_to_tip_with_progress` (254-260), `dash_spv_ffi_client_cancel_sync` (262-262), `dash_spv_ffi_client_is_filter_sync_available` (268-268), `dash_spv_ffi_client_enable_mempool_tracking` (330-331), `dash_spv_ffi_client_get_balance_with_mempool` (333-333), `dash_spv_ffi_client_get_mempool_transaction_count` (335-335), `dash_spv_ffi_client_record_send` (337-337), `dash_spv_ffi_client_get_mempool_balance` (339-340)
🔇 Additional comments (4)
dash-spv-ffi/src/client.rs (2)
20-26: LGTM! Well-designed global callback registry.

The use of `Lazy` and `Arc<Mutex<>>` provides thread-safe initialization and access to the callback registry. The atomic counter ensures unique IDs without race conditions.
453-575: LGTM! Well-implemented async progress tracking.

The implementation properly:
- Uses the callback registry for safe lifetime management
- Spawns separate threads for progress monitoring and sync execution
- Includes detailed safety comments
- Properly cleans up callbacks on completion
dash-spv/src/client/mod.rs (2)
1010-1044: LGTM: Clean message handling delegation with appropriate client-level processing.

The delegation to `MessageHandler` provides good separation of concerns while maintaining client-specific processing for ChainLock and InstantLock messages. This architectural pattern makes the code more maintainable and testable.
1536-1562: LGTM: Well-integrated mempool filter updates with watch item management.

The conditional updating of mempool filters when watch items change is properly implemented. The checks for `enable_mempool_tracking` ensure the updates only occur when necessary, maintaining good performance characteristics.
🤖 CodeRabbitAI Fixes Summary

I've automatically addressed CodeRabbitAI review comments from this PR. Here's what was done:

📊 Statistics
📝 Commits

The following commits were created to address the issues:

✅ Verification
🔍 Review Instructions
📋 Issues Fixed
This summary was generated automatically by
Force-pushed 3507135 to 721070f
@coderabbitai review
✅ Actions performed

Review triggered.
There was a problem hiding this comment.
Actionable comments posted: 10
🔭 Outside diff range comments (1)
dash-spv/src/network/connection.rs (1)
121-156: Make read timeout configurable in connect_instance method.

The `connect_instance` method still uses a hardcoded 15ms timeout (line 137), while the static `connect` method now accepts a configurable `read_timeout` parameter. This inconsistency could lead to different timeout behaviors.

Apply this diff to make the timeout configurable:
```diff
-    pub async fn connect_instance(&mut self) -> NetworkResult<()> {
+    pub async fn connect_instance(&mut self, read_timeout: Duration) -> NetworkResult<()> {
         let stream = TcpStream::connect_timeout(&self.address, self.timeout).map_err(|e| {
             NetworkError::ConnectionFailed(format!("Failed to connect to {}: {}", self.address, e))
         })?;

         // Don't set socket timeouts - we handle timeouts at the application level
         // and socket timeouts can interfere with async operations

         // Disable Nagle's algorithm for lower latency
         stream.set_nodelay(true).map_err(|e| {
             NetworkError::ConnectionFailed(format!("Failed to set TCP_NODELAY: {}", e))
         })?;

         // Set a read timeout instead of non-blocking mode
         // This allows us to return None when no data is available
-        // Using 15ms timeout for optimal balance of performance and reliability
-        stream.set_read_timeout(Some(Duration::from_millis(15))).map_err(|e| {
+        stream.set_read_timeout(Some(read_timeout)).map_err(|e| {
             NetworkError::ConnectionFailed(format!("Failed to set read timeout: {}", e))
         })?;
```
♻️ Duplicate comments (11)
dash-spv/src/chain/chain_tip.rs (1)
75-105: Great fix for atomicity!

The implementation now properly handles the atomic replacement of tips by storing the old tip temporarily and restoring it if `add_tip` fails. This fully addresses the previous data loss concern.
145-161: Excellent optimization of cache eviction!

The use of `IndexMap` to maintain insertion order and `shift_remove` for O(1) removal of oldest entries is a significant improvement over the previous O(n log n) sorting approach. The batch collection of keys before removal also minimizes lock contention.
673-710: Consistent error handling pattern applied to initial batch processing.

This continues the robust error handling pattern established elsewhere in the file. The implementation is correct and consistent.
Consider the same refactoring suggestion as mentioned for lines 530-569 to extract this common pattern into a reusable helper method.
dash-spv/src/sync/headers_with_reorg.rs (1)
315-340: Critical: Chain reorganization is still not implemented.

The reorganization functionality remains broken due to the borrow conflict between ChainStorage and StorageManager. This is a critical gap in the chain reorganization functionality.
dash-spv/src/storage/disk.rs (2)
83-83: Excellent fix: Addresses past review comment.

The `valid_count` field directly addresses the previous review concern about header padding confusion by tracking the actual number of valid headers separately from the padded count.
391-409: Excellent sentinel header implementation.

The sentinel header design uses distinctive invalid values (max version, 0xFF patterns, far future timestamps) that cannot be mistaken for valid blocks. The implementation properly:
- Tracks valid headers separately via `valid_count`
- Skips saving sentinel headers to disk
- Uses sentinels only for in-memory indexing consistency

This elegantly solves the header padding confusion issue.
Also applies to: 421-421, 901-908, 1032-1041, 1044-1045
dash-spv-ffi/src/client.rs (5)
20-72: Excellent thread-safe callback registry implementation!

This properly addresses the previous concerns about raw pointer thread safety by using a global registry with unique IDs. The implementation ensures callbacks are safely managed across threads without raw pointer passing.
327-400: Good fix - removed misleading progress callback!

The implementation correctly addresses the previous review comment by removing the unused progress callback parameter. The callback registry ensures safe lifetime management.
596-641: Good documentation of sync cancellation limitations!

The implementation properly addresses the previous review comment by clearly documenting that sync cancellation only stops the client but doesn't fully abort the ongoing sync process. The TODO comment indicates this is a known limitation to be addressed in the future.
1003-1037: Excellent cleanup implementation with proper thread joining!

This properly addresses the previous review comment by:
- Setting a shutdown signal for all threads
- Unregistering callbacks from the global registry
- Joining all active threads before destruction
- Handling thread join errors appropriately
1710-1718: Consider clarifying the mempool balance field semantics.

The current implementation sets:
- `pending` = mempool pending amount
- `mempool` = total of pending + instant amounts

This might be confusing as both fields contain overlapping data. Consider documenting the intended semantics or adjusting the field assignments for clarity.
Consider this alternative mapping for better clarity:
```diff
 let balance = FFIBalance {
     confirmed: 0, // No confirmed balance in mempool
-    pending: mempool_balance.pending.to_sat(),
+    pending: 0, // Use mempool-specific fields instead
     instantlocked: 0, // No confirmed instantlocked in mempool
-    mempool: mempool_balance.pending.to_sat() + mempool_balance.pending_instant.to_sat(),
+    mempool: mempool_balance.pending.to_sat(), // Regular mempool transactions only
     mempool_instant: mempool_balance.pending_instant.to_sat(),
     total: mempool_balance.pending.to_sat() + mempool_balance.pending_instant.to_sat(),
 };
```

🧹 Nitpick comments (24)
🧹 Nitpick comments (24)
dash-network-ffi/src/dash_network_ffi.swift (1)
318-319: Track: Cancellation support not yet implemented

The code indicates that cancellation is not supported yet and will cause a fatal error if encountered. Consider implementing proper cancellation support or documenting this limitation more prominently.
Would you like me to create an issue to track the implementation of cancellation support in the FFI layer?
dash-spv/CLAUDE.md (1)
104-104: Minor: Consider hyphenating compound adjective

"on demand" could be written as "on-demand" when used as a compound adjective modifying "filters".

```diff
-- **Phase 4: Filters** - Download specific filters on demand
+- **Phase 4: Filters** - Download specific filters on-demand
```

CLAUDE.md (1)
185-185: Minor: Use en dash for version ranges

Consider using an en dash (–) instead of a hyphen for the version range to follow proper typography conventions.

```diff
-- Support for Dash Core versions 0.18.0 - 0.21.0
+- Support for Dash Core versions 0.18.0–0.21.0
```

dash-spv/docs/TERMINAL_BLOCKS.md (1)
28-37: Add language specifications to code blocks

The fenced code blocks should specify a language for proper syntax highlighting.
````diff
-```
+```text
 Request: Genesis (0) → Current (1,276,272)
 Diff size: ~500MB, covering 1.2M blocks
````

With terminal blocks:

````diff
-```
+```text
 Request: Terminal Block (900,000) → Current (1,276,272)
 Diff size: ~100MB, covering 376K blocks
````

dash-spv/scripts/fetch_terminal_blocks.py (4)
111-113: Consider preserving original working directory

The script changes the working directory but doesn't restore it. This could cause issues if the script is imported or used as part of a larger process.

```diff
+    # Save original directory
+    original_dir = os.getcwd()
     # Change to dash-cli directory
     os.chdir(dash_cli_path)
+
+    try:
+        # ... rest of the main function logic ...
+    finally:
+        # Restore original directory
+        os.chdir(original_dir)
```
39-39: Fix PEP 8 compliance: Add missing blank lines

Add the required blank lines between function definitions to comply with PEP 8.

```diff
 }
+
+
 def run_dash_cli(network, *args, parse_json=True):
```

```diff
     return None
+
+
 def fetch_terminal_block_data(network, height, genesis_hash):
```

```diff
 }
+
+
 def main():
```

```diff
     print(f"\n✓ Generated {rust_file}")
+
+
 if __name__ == "__main__":
```

Also applies to: 61-61, 98-98, 164-164
48-51: Simplify control flow by removing unnecessary else

The else block is unnecessary after a return statement.

```diff
 if parse_json:
     return json.loads(result.stdout)
-else:
-    return result.stdout.strip()
+return result.stdout.strip()
```
152-158: Remove unnecessary f-string prefixes

These strings don't contain any placeholders, so the f-prefix is not needed.

```diff
 f.write(f'    // Terminal block {height}\n')
-f.write(f'    {{\n')
+f.write('    {\n')
 f.write(f'    let data = include_str!("terminal_block_{height}.json");\n')
-f.write(f'    if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {{\n')
+f.write('    if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {\n')
-f.write(f'    manager.add_state(state);\n')
+f.write('    manager.add_state(state);\n')
-f.write(f'    }}\n')
+f.write('    }\n')
-f.write(f'    }}\n\n')
+f.write('    }\n\n')
```

dash-spv/src/sync/headers2_state.rs (1)
49-61: Consider encapsulating statistics fields.

The statistics fields are currently public, which allows external modification and could lead to inconsistent state. Consider making these fields private and providing getter methods, or using the existing `get_stats()` method exclusively.

```diff
 #[derive(Debug, Default)]
 pub struct Headers2StateManager {
     /// Compression state per peer
     peer_states: HashMap<PeerId, CompressionState>,
     /// Statistics
-    pub total_headers_received: u64,
-    pub compressed_headers_received: u64,
-    pub bytes_saved: u64,
-    pub total_bytes_received: u64,
+    total_headers_received: u64,
+    compressed_headers_received: u64,
+    bytes_saved: u64,
+    total_bytes_received: u64,
 }
```

dash-spv/src/chain/fork_detector.rs (1)
28-139: Consider refactoring this complex method for better maintainability.

The `check_header` method is over 100 lines with multiple nested conditions and duplicated genesis block handling logic. Consider extracting helper methods to improve readability and reduce duplication.

Extract helper methods for each detection type:

```rust
fn extends_main_chain(&self, header: &BlockHeader, chain_state: &ChainState) -> bool {
    // Main chain extension logic
}

fn extends_known_fork(&mut self, header: &BlockHeader) -> Option<Fork> {
    // Fork extension logic
}

fn check_genesis_connection(&self, prev_hash: BlockHash, chain_state: &ChainState) -> bool {
    // Consolidated genesis checking logic
}
```

This would make the main method more readable:

```rust
pub fn check_header(&mut self, header: &BlockHeader, chain_state: &ChainState, storage: &dyn ChainStorage) -> ForkDetectionResult {
    if self.extends_main_chain(header, chain_state) {
        return ForkDetectionResult::ExtendsMainChain;
    }
    if let Some(fork) = self.extends_known_fork(header) {
        return ForkDetectionResult::ExtendsFork(fork);
    }
    // Continue with other checks...
}
```

dash-spv/src/network/handshake.rs (1)
24-42: Consider eliminating redundant state tracking.

The handshake state is tracked using both the `HandshakeState` enum and boolean flags (`version_received`, `verack_received`, `version_sent`). This redundancy could lead to inconsistent state. Consider using only the enum for state tracking.

The state enum already captures all necessary states. You could derive the boolean values from the current state:

```rust
impl HandshakeManager {
    fn version_sent(&self) -> bool {
        !matches!(self.state, HandshakeState::Init)
    }

    fn version_received(&self) -> bool {
        matches!(self.state, HandshakeState::VersionReceivedVerackSent | HandshakeState::Complete)
    }

    fn verack_received(&self) -> bool {
        matches!(self.state, HandshakeState::VerackReceived | HandshakeState::Complete)
    }
}
```

dash-spv/src/bloom/manager.rs (1)
100-106: Consider preserving the original error type.

The current error conversion to string loses the original error type information, which could be useful for debugging and error handling upstream.

Consider defining a more specific error variant in `SpvError` that preserves the bloom filter error:

```diff
-.map_err(|e| SpvError::General(format!("Failed to create bloom filter: {:?}", e)))?;
+.map_err(|e| SpvError::BloomFilterCreation(e))?;
```

This assumes adding a `BloomFilterCreation` variant to `SpvError` that wraps the original error type.

dash-spv/src/sync/mod.rs (1)
47-51: Consider documenting the default ReorgConfig behavior.

The hardcoded `ReorgConfig::default()` might not suit all legacy users. Consider adding a comment explaining what the default values are, or provide a way to customize them.

```diff
 // Create reorg config with sensible defaults
+// Default: max_reorg_depth=1000, enforce_chain_locks=true
 let reorg_config = ReorgConfig::default();
```

dash-spv/src/chain/reorg.rs (1)
165-193: Add height tracking in find_common_ancestor for efficiency.

The current implementation walks back through the chain without tracking how far it has gone. For deep reorgs, this could be inefficient, and the safety check only catches the genesis case.

Consider adding a counter to limit the search depth:

```diff
 fn find_common_ancestor(
     &self,
     _chain_state: &ChainState,
     fork_point: &BlockHash,
     storage: &dyn ChainStorage,
 ) -> Result<(BlockHash, u32), String> {
     // Start from the fork point and walk back until we find a block in our chain
     let mut current_hash = *fork_point;
+    let mut iterations = 0;
+    const MAX_ITERATIONS: u32 = 10000; // Reasonable limit

     loop {
+        iterations += 1;
+        if iterations > MAX_ITERATIONS {
+            return Err(format!("Exceeded maximum iterations ({}) while searching for common ancestor", MAX_ITERATIONS));
+        }
+
         if let Ok(Some(height)) = storage.get_header_height(&current_hash) {
             // Found it in our chain
             return Ok((current_hash, height));
         }
```

dash-spv-ffi/src/types.rs (1)
92-167: Review the complex stage message generation logic.

The `From<DetailedSyncProgress>` implementation contains extensive pattern matching for stage messages. While functional, consider extracting the stage message generation into a separate method to improve readability.

```diff
 impl From<DetailedSyncProgress> for FFIDetailedSyncProgress {
     fn from(progress: DetailedSyncProgress) -> Self {
         use std::time::UNIX_EPOCH;

+        fn format_stage_message(stage: &SyncStage) -> String {
+            match stage {
+                SyncStage::Connecting => "Connecting to peers".to_string(),
+                SyncStage::QueryingPeerHeight => "Querying blockchain height".to_string(),
+                SyncStage::DownloadingHeaders { start, end } =>
+                    format!("Downloading headers {} to {}", start, end),
+                SyncStage::ValidatingHeaders { batch_size } =>
+                    format!("Validating {} headers", batch_size),
+                SyncStage::StoringHeaders { batch_size } =>
+                    format!("Storing {} headers", batch_size),
+                SyncStage::Complete => "Synchronization complete".to_string(),
+                SyncStage::Failed(err) => err.clone(),
+            }
+        }

-        let stage_message = match &progress.sync_stage {
-            // ... all the match arms
-        };
+        let stage_message = format_stage_message(&progress.sync_stage);

         FFIDetailedSyncProgress {
             // ... rest of the fields
         }
     }
 }
```

dash-spv/src/storage/mod.rs (1)
135-198: Consider splitting the StorageManager trait.

The `StorageManager` trait has grown significantly with ~20 new methods. While all methods are related to storage, consider splitting it into focused traits for better maintainability.

Consider organizing into separate traits:

```rust
#[async_trait]
pub trait CoreStorageManager: Send + Sync {
    // Current core methods (headers, filters, etc.)
}

#[async_trait]
pub trait SyncStorageManager: Send + Sync {
    // Sync state, checkpoints, etc.
    async fn store_sync_state(&mut self, state: &PersistentSyncState) -> StorageResult<()>;
    // ... other sync methods
}

#[async_trait]
pub trait MempoolStorageManager: Send + Sync {
    // Mempool-related methods
    async fn store_mempool_transaction(&mut self, txid: &Txid, tx: &UnconfirmedTransaction) -> StorageResult<()>;
    // ... other mempool methods
}

#[async_trait]
pub trait ChainLockStorageManager: Send + Sync {
    // Chain lock and instant lock methods
    async fn store_chain_lock(&mut self, height: u32, chain_lock: &dashcore::ChainLock) -> StorageResult<()>;
    // ... other chain lock methods
}

// Main trait that combines all storage capabilities
pub trait StorageManager:
    CoreStorageManager + SyncStorageManager + MempoolStorageManager + ChainLockStorageManager
{
    fn as_any_mut(&mut self) -> &mut dyn Any;
}
```

This would improve code organization and make it easier to implement storage backends that only need a subset of the functionality.
dash-spv/src/sync/filters.rs (2)
530-569: Consider refactoring to reduce code duplication.

This error handling logic is identical to lines 373-413 but applied during recovery. While the implementation is correct, consider extracting this pattern into a helper method to reduce duplication and improve maintainability.

Example refactor:

```diff
+    async fn get_header_with_fallback(
+        &self,
+        storage: &dyn StorageManager,
+        height: u32,
+        context: &str,
+    ) -> SyncResult<BlockHash> {
+        match storage.get_header(height).await {
+            Ok(Some(header)) => Ok(header.block_hash()),
+            Ok(None) if height > 0 => {
+                tracing::debug!("Tip header not found at height {} {}, trying previous header", height, context);
+                storage.get_header(height - 1).await
+                    .map_err(|e| SyncError::SyncFailed(format!("Failed to get previous header {}: {}", context, e)))?
+                    .ok_or_else(|| SyncError::SyncFailed(format!("Neither tip ({}) nor previous header found {}", height, context)))?
+                    .block_hash()
+            }
+            // ... rest of error handling
+        }
+    }
```
1303-1312: Consider improving mutex lock handling.

The periodic logging is useful for monitoring sync progress. However, the mutex handling could be more robust:

```diff
-        if self.received_filter_heights.lock().unwrap().len() % 1000 == 0 {
+        if let Ok(heights) = self.received_filter_heights.lock() {
+            if heights.len() % 1000 == 0 {
                 tracing::info!(
                     "Filter sync state: {} filters received, {} active requests, {} pending requests",
-                    self.received_filter_heights.lock().unwrap().len(),
+                    heights.len(),
                     self.active_filter_requests.len(),
                     self.pending_filter_requests.len()
                 );
+            }
+        }
```

This avoids potential panics from `unwrap()` and eliminates the duplicate lock acquisition.

dash-spv/src/network/connection.rs (1)
177-197: Consider relaxing protocol version validation constraints.

The validation sets `MIN_PROTOCOL_VERSION` to 60001 and `MAX_PROTOCOL_VERSION` to 100000. While validation is good, the maximum constraint might be too restrictive and could reject legitimate newer protocol versions from future Dash client updates.

Consider using a warning instead of rejection for high protocol versions:

```diff
 if version_msg.version > MAX_PROTOCOL_VERSION {
-    tracing::warn!(
-        "Peer {} reported suspiciously high protocol version {}, skipping update",
-        self.address, version_msg.version
-    );
-    return;
+    tracing::warn!(
+        "Peer {} reported unusually high protocol version {}, proceeding with caution",
+        self.address, version_msg.version
+    );
+    // Continue processing rather than rejecting
 }
```

dash-spv/src/sync/headers.rs (1)
71-82: Consider reducing log verbosity for production.

The enhanced logging includes multiple info-level logs for each header batch processed, which could become very verbose during initial sync when processing thousands of headers. Consider using debug level for detailed hash logging.

```diff
     // Log the first and last header received
-    tracing::info!(
+    tracing::debug!(
         "📥 Processing headers: first={} last={}",
         headers[0].block_hash(),
         headers.last().unwrap().block_hash()
     );

     // Get the current tip before processing
     let tip_before = storage.get_tip_height().await
         .map_err(|e| SyncError::SyncFailed(format!("Failed to get tip height: {}", e)))?;
-    tracing::info!("📊 Current tip height before processing: {:?}", tip_before);
+    tracing::debug!("📊 Current tip height before processing: {:?}", tip_before);
```

Also applies to: 136-148
dash-spv/src/network/reputation.rs (1)
19-49: Consider making misbehavior scores configurable.

The misbehavior scores are currently hardcoded constants. For different network environments or testing scenarios, it might be beneficial to make these configurable through the client configuration.

Consider adding these scores to `ClientConfig` to allow runtime configuration for different deployment scenarios.
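A sketch of what lifting the constants into configuration might look like; the struct name, fields, and default values below are illustrative, not taken from the crate:

```rust
/// Hypothetical config knob for reputation penalties, with the
/// previously hardcoded constants as defaults (values illustrative).
#[derive(Debug, Clone)]
pub struct MisbehaviorScores {
    pub invalid_header: i32,
    pub timeout: i32,
    pub protocol_violation: i32,
}

impl Default for MisbehaviorScores {
    fn default() -> Self {
        Self { invalid_header: 20, timeout: 1, protocol_violation: 50 }
    }
}

fn main() {
    // A stricter profile for a test network overrides one field only.
    let strict = MisbehaviorScores { timeout: 5, ..Default::default() };
    assert_eq!(strict.timeout, 5);
    assert_eq!(strict.invalid_header, 20);
    println!("strict profile: {:?}", strict);
}
```

Struct-update syntax (`..Default::default()`) keeps per-environment overrides minimal while the defaults preserve current behavior.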
56-65: Add documentation for new struct fields.

The newly added fields should be documented to clarify their purpose:

- `data_dir`: purpose and expected path format
- `mempool_strategy`: how this affects peer behavior
- `last_message_peer`: what this tracks and why
- `read_timeout`: connection read timeout duration

```diff
-    /// Data directory for storage
-    data_dir: PathBuf,
-    /// Mempool strategy from config
-    mempool_strategy: MempoolStrategy,
-    /// Last peer that sent us a message
-    last_message_peer: Arc<Mutex<Option<SocketAddr>>>,
-    /// Read timeout for TCP connections
-    read_timeout: Duration,
+    /// Data directory for storing persistent data like peer reputation
+    data_dir: PathBuf,
+    /// Mempool strategy that determines how mempool data is requested from peers
+    mempool_strategy: MempoolStrategy,
+    /// Socket address of the last peer that sent us a message (used for tracking message sources)
+    last_message_peer: Arc<Mutex<Option<SocketAddr>>>,
+    /// Read timeout duration for TCP socket operations
+    read_timeout: Duration,
```
417-421: Consider more lenient handling of read timeouts.

Network timeouts can occur for legitimate reasons (network congestion, routing issues) that are beyond the peer's control. The current approach might unfairly penalize good peers experiencing temporary network issues.

Consider implementing a more nuanced approach:

```diff
-    // Minor reputation penalty for timeout
-    reputation_manager.update_reputation(
-        addr,
-        misbehavior_scores::TIMEOUT,
-        "Read timeout",
-    ).await;
+    // Only penalize after multiple consecutive timeouts
+    // Track timeout count per peer and only penalize if it exceeds threshold
+    log::debug!("Read timeout from {} - consider tracking consecutive timeouts", addr);
+    // TODO: Implement timeout tracking before applying reputation penalty
```

dash-spv-ffi/include/dash_spv_ffi.h (1)
281-281: Add documentation for filter sync availability check.

The function lacks documentation explaining its purpose and return value semantics.

Add documentation:

```diff
+/**
+ * Checks if filter synchronization is available for the client.
+ *
+ * # Safety
+ * - `client` must be a valid, non-null pointer
+ *
+ * # Returns
+ * Returns true if filter sync is available, false otherwise.
+ */
 bool dash_spv_ffi_client_is_filter_sync_available(struct FFIDashSpvClient *client);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (80)
- .gitignore (1 hunks)
- CLAUDE.md (1 hunks)
- dash-network-ffi/src/dash_network_ffi.swift (1 hunks)
- dash-network-ffi/src/dash_network_ffiFFI.h (1 hunks)
- dash-network-ffi/src/dash_network_ffiFFI.modulemap (1 hunks)
- dash-spv-ffi/Cargo.toml (1 hunks)
- dash-spv-ffi/cbindgen.toml (1 hunks)
- dash-spv-ffi/include/dash_spv_ffi.h (9 hunks)
- dash-spv-ffi/src/callbacks.rs (2 hunks)
- dash-spv-ffi/src/client.rs (10 hunks)
- dash-spv-ffi/src/config.rs (2 hunks)
- dash-spv-ffi/src/error.rs (1 hunks)
- dash-spv-ffi/src/lib.rs (1 hunks)
- dash-spv-ffi/src/types.rs (7 hunks)
- dash-spv-ffi/src/wallet.rs (5 hunks)
- dash-spv-ffi/tests/test_client.rs (1 hunks)
- dash-spv-ffi/tests/test_event_callbacks.rs (1 hunks)
- dash-spv-ffi/tests/test_mempool_tracking.rs (1 hunks)
- dash-spv-ffi/tests/test_types.rs (1 hunks)
- dash-spv-ffi/tests/unit/test_async_operations.rs (7 hunks)
- dash-spv-ffi/tests/unit/test_client_lifecycle.rs (2 hunks)
- dash-spv-ffi/tests/unit/test_type_conversions.rs (1 hunks)
- dash-spv/CLAUDE.md (4 hunks)
- dash-spv/Cargo.toml (2 hunks)
- dash-spv/README.md (2 hunks)
- dash-spv/data/mainnet/mod.rs (1 hunks)
- dash-spv/data/testnet/mod.rs (1 hunks)
- dash-spv/docs/TERMINAL_BLOCKS.md (1 hunks)
- dash-spv/docs/utxo_rollback.md (1 hunks)
- dash-spv/examples/test_genesis.rs (1 hunks)
- dash-spv/examples/test_terminal_blocks.rs (1 hunks)
- dash-spv/scripts/fetch_terminal_blocks.py (1 hunks)
- dash-spv/src/bloom/builder.rs (1 hunks)
- dash-spv/src/bloom/manager.rs (1 hunks)
- dash-spv/src/bloom/mod.rs (1 hunks)
- dash-spv/src/bloom/stats.rs (1 hunks)
- dash-spv/src/bloom/utils.rs (1 hunks)
- dash-spv/src/chain/chain_tip.rs (1 hunks)
- dash-spv/src/chain/chain_work.rs (1 hunks)
- dash-spv/src/chain/chainlock_manager.rs (1 hunks)
- dash-spv/src/chain/chainlock_test.rs (1 hunks)
- dash-spv/src/chain/checkpoints.rs (1 hunks)
- dash-spv/src/chain/fork_detector.rs (1 hunks)
- dash-spv/src/chain/mod.rs (1 hunks)
- dash-spv/src/chain/orphan_pool.rs (1 hunks)
- dash-spv/src/chain/reorg.rs (1 hunks)
- dash-spv/src/client/block_processor.rs (8 hunks)
- dash-spv/src/client/config.rs (8 hunks)
- dash-spv/src/client/consistency.rs (2 hunks)
- dash-spv/src/client/filter_sync.rs (3 hunks)
- dash-spv/src/client/message_handler.rs (14 hunks)
- dash-spv/src/client/mod.rs (35 hunks)
- dash-spv/src/client/status_display.rs (1 hunks)
- dash-spv/src/client/watch_manager.rs (0 hunks)
- dash-spv/src/error.rs (5 hunks)
- dash-spv/src/lib.rs (1 hunks)
- dash-spv/src/main.rs (2 hunks)
- dash-spv/src/mempool_filter.rs (1 hunks)
- dash-spv/src/network/connection.rs (15 hunks)
- dash-spv/src/network/constants.rs (1 hunks)
- dash-spv/src/network/handshake.rs (11 hunks)
- dash-spv/src/network/message_handler.rs (5 hunks)
- dash-spv/src/network/mock.rs (1 hunks)
- dash-spv/src/network/mod.rs (4 hunks)
- dash-spv/src/network/multi_peer.rs (27 hunks)
- dash-spv/src/network/reputation.rs (1 hunks)
- dash-spv/src/network/reputation_tests.rs (1 hunks)
- dash-spv/src/network/tests.rs (2 hunks)
- dash-spv/src/storage/disk.rs (14 hunks)
- dash-spv/src/storage/memory.rs (6 hunks)
- dash-spv/src/storage/mod.rs (2 hunks)
- dash-spv/src/storage/sync_state.rs (1 hunks)
- dash-spv/src/storage/sync_storage.rs (1 hunks)
- dash-spv/src/storage/types.rs (1 hunks)
- dash-spv/src/sync/filters.rs (15 hunks)
- dash-spv/src/sync/headers.rs (11 hunks)
- dash-spv/src/sync/headers2_state.rs (1 hunks)
- dash-spv/src/sync/headers_with_reorg.rs (1 hunks)
- dash-spv/src/sync/masternodes.rs (8 hunks)
- dash-spv/src/sync/mod.rs (9 hunks)
💤 Files with no reviewable changes (1)
- dash-spv/src/client/watch_manager.rs
🚧 Files skipped from review as they are similar to previous changes (50)
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/tests/unit/test_type_conversions.rs
- dash-spv-ffi/tests/test_types.rs
- .gitignore
- dash-spv-ffi/Cargo.toml
- dash-spv-ffi/src/error.rs
- dash-spv-ffi/cbindgen.toml
- dash-spv/src/lib.rs
- dash-spv/Cargo.toml
- dash-spv/src/main.rs
- dash-spv/src/client/consistency.rs
- dash-network-ffi/src/dash_network_ffiFFI.modulemap
- dash-spv/src/client/status_display.rs
- dash-spv/src/client/filter_sync.rs
- dash-spv-ffi/tests/unit/test_client_lifecycle.rs
- dash-spv/data/testnet/mod.rs
- dash-spv/examples/test_terminal_blocks.rs
- dash-spv/src/network/tests.rs
- dash-spv/src/storage/types.rs
- dash-spv/examples/test_genesis.rs
- dash-spv/README.md
- dash-spv/src/bloom/utils.rs
- dash-spv/src/network/constants.rs
- dash-spv-ffi/tests/test_client.rs
- dash-spv/src/bloom/mod.rs
- dash-spv/src/chain/chainlock_test.rs
- dash-spv/src/network/reputation_tests.rs
- dash-spv-ffi/tests/unit/test_async_operations.rs
- dash-spv-ffi/tests/test_mempool_tracking.rs
- dash-spv/src/client/block_processor.rs
- dash-spv-ffi/tests/test_event_callbacks.rs
- dash-spv/src/chain/mod.rs
- dash-spv/src/storage/sync_storage.rs
- dash-spv/src/chain/chain_work.rs
- dash-spv/src/network/mock.rs
- dash-spv-ffi/src/wallet.rs
- dash-spv/data/mainnet/mod.rs
- dash-spv/src/network/message_handler.rs
- dash-spv/src/chain/orphan_pool.rs
- dash-spv/src/error.rs
- dash-spv-ffi/src/config.rs
- dash-spv/src/mempool_filter.rs
- dash-spv/src/bloom/builder.rs
- dash-network-ffi/src/dash_network_ffiFFI.h
- dash-spv/src/storage/sync_state.rs
- dash-spv/src/bloom/stats.rs
- dash-spv/src/network/mod.rs
- dash-spv/src/chain/checkpoints.rs
- dash-spv/src/client/message_handler.rs
- dash-spv/src/storage/memory.rs
🧰 Additional context used
🧬 Code Graph Analysis (2)
dash-spv-ffi/src/client.rs (6)
- dash-spv-ffi/src/error.rs (2): `set_last_error` (25-30), `from` (49-59)
- dash-spv-ffi/src/types.rs (13): `new` (14-19, 274-286), `from` (39-46, 50-58, 76-89, 105-115, 133-166, 180-191, 207-218, 233-247, 332-338, 342-348, 362-370)
- dash-spv/src/wallet/mod.rs (3): `new` (77-85, 110-119), `total` (88-90)
- dash-spv/src/client/mod.rs (1): `new` (216-306)
- dash-spv-ffi/include/dash_spv_ffi.h (6): `confirmed` (129-129), `txid` (122-127, 131-135, 137-140, 142-142), `height` (120-120)
- dash-spv-ffi/tests/unit/test_async_operations.rs (1): `on_block` (311-315)
dash-spv/src/client/mod.rs (12)
- dash-spv/src/sync/filters.rs (3): `mpsc` (2271-2271), `new` (139-171), `is_filter_sync_available` (184-186)
- dash-spv/src/sync/sequential/phases.rs (1): `progress` (194-359)
- dash-spv/src/client/status_display.rs (3): `stats` (68-71), `chain_state` (74-77), `sync_progress` (40-65)
- dash-spv/src/mempool_filter.rs (3): `new` (30-44, 297-303), `network` (313-315)
- dash-spv/src/types.rs (9): `new` (182-184, 965-985), `default` (63-79, 167-177, 355-357, 671-696), `address` (436-441), `txid` (988-990), `tip_hash` (228-230)
- dash-spv/src/sync/sequential/mod.rs (3): `new` (67-88), `start_sync` (116-149), `handle_inventory` (1140-1233)
- dash-spv/src/wallet/mod.rs (5): `new` (77-85, 110-119), `default` (103-105), `network` (122-125), `rollback_to_height` (709-754)
- dash-spv/src/client/message_handler.rs (4): `new` (33-61), `handle_inventory` (262-357), `process_new_block_hash` (379-389), `process_new_block` (421-441)
- dash-spv/src/client/block_processor.rs (1): `new` (37-53)
- dash-spv/src/client/config.rs (1): `default` (165-215)
- dash-spv/src/client/watch_manager.rs (1): `remove_watch_item` (77-131)
- dash-spv/src/storage/sync_state.rs (1): `from_chain_state` (148-194)
🪛 LanguageTool
dash-spv/CLAUDE.md
[uncategorized] ~104-~104: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...: Filters** - Download specific filters on demand - Phase 5: Blocks - Download blocks...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
CLAUDE.md
[typographical] ~185-~185: If you want to indicate numerical ranges or time ranges, consider using an en dash.
Context: ... - Support for Dash Core versions 0.18.0 - 0.21.0 ## Security Considerations - Th...
(DASH_RULE)
🪛 markdownlint-cli2 (0.17.2)
dash-spv/docs/TERMINAL_BLOCKS.md
28-28: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
34-34: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
🪛 Ruff (0.11.9)
dash-spv/scripts/fetch_terminal_blocks.py
153-153: f-string without any placeholders
Remove extraneous f prefix
(F541)
155-155: f-string without any placeholders
Remove extraneous f prefix
(F541)
156-156: f-string without any placeholders
Remove extraneous f prefix
(F541)
157-157: f-string without any placeholders
Remove extraneous f prefix
(F541)
158-158: f-string without any placeholders
Remove extraneous f prefix
(F541)
🪛 Flake8 (7.2.0)
dash-spv/scripts/fetch_terminal_blocks.py
[error] 39-39: expected 2 blank lines, found 1
(E302)
[error] 61-61: expected 2 blank lines, found 1
(E302)
[error] 98-98: expected 2 blank lines, found 1
(E302)
[error] 153-153: f-string is missing placeholders
(F541)
[error] 155-155: f-string is missing placeholders
(F541)
[error] 156-156: f-string is missing placeholders
(F541)
[error] 157-157: f-string is missing placeholders
(F541)
[error] 158-158: f-string is missing placeholders
(F541)
[error] 164-164: expected 2 blank lines after class or function definition, found 1
(E305)
🪛 Pylint (3.3.7)
dash-spv/scripts/fetch_terminal_blocks.py
[refactor] 48-51: Unnecessary "else" after "return", remove the "else" and de-indent the code inside it
(R1705)
🔇 Additional comments (84)
dash-network-ffi/src/dash_network_ffi.swift (1)
775-778: Note: Error messages are ignored during serialization

The FfiConverter for NetworkError ignores the associated message values when writing. This means error messages are not preserved when passing errors back to Rust. If bidirectional error propagation with messages is needed, this implementation should be enhanced.
dash-spv/src/sync/headers2_state.rs (2)
25-48: Well-structured error handling implementation.

The `ProcessError` enum provides clear error variants with descriptive messages. The Display and Error trait implementations follow Rust best practices.
172-263: Comprehensive test coverage.

The tests thoroughly cover the main functionality, including edge cases for first header validation, peer state management, and statistics calculation.
dash-spv/docs/utxo_rollback.md (1)
1-200: Excellent documentation for the UTXO rollback mechanism.

The documentation is comprehensive, well-structured, and includes practical examples. It covers all important aspects, including usage, configuration, testing, and security considerations.
dash-spv/src/chain/fork_detector.rs (1)
208-288: Good test coverage with room for expansion.

The tests cover the basic fork detection scenarios well. Consider adding tests for additional edge cases such as:
- Multiple competing forks
- Fork reorganization scenarios
- Maximum fork depth handling
dash-spv/src/network/handshake.rs (2)
279-294: Headers2 negotiation properly extracted.

Good job extracting the headers2 negotiation logic into a separate method, as suggested in the previous review. This eliminates code duplication and improves maintainability.
253-256: Correct mempool strategy integration.

The relay flag is properly set based on the mempool strategy, ensuring that transaction relay is only enabled when using the FetchAll strategy.
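The described mapping can be sketched in a few lines; the enum below is a hypothetical stand-in for the crate's `MempoolStrategy` (the `FetchAll` and `Selective` variant names follow the review, everything else is illustrative):

```rust
// Hypothetical mirror of the mempool strategy discussed above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum MempoolStrategy {
    /// Fetch every announced transaction (highest bandwidth, lowest privacy).
    FetchAll,
    /// Fetch only transactions matching watched items.
    Selective,
}

/// Transaction relay is requested from peers only when we intend
/// to fetch everything; other strategies leave relay disabled.
fn relay_flag(strategy: MempoolStrategy) -> bool {
    matches!(strategy, MempoolStrategy::FetchAll)
}

fn main() {
    assert!(relay_flag(MempoolStrategy::FetchAll));
    assert!(!relay_flag(MempoolStrategy::Selective));
    println!("relay enabled only for FetchAll");
}
```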
dash-spv/src/client/config.rs (2)
12-21: Well-designed mempool strategy enum.

The `MempoolStrategy` enum clearly documents the privacy and bandwidth tradeoffs of each strategy, helping users make informed choices.
375-386: Comprehensive mempool configuration validation.

The validation ensures all mempool-related parameters are properly configured when tracking is enabled, including special handling for the Selective strategy.
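A minimal sketch of that validation shape, assuming hypothetical field names (the real `ClientConfig` fields differ):

```rust
/// Illustrative mempool config: when tracking is enabled its limits
/// must be set, and the Selective strategy needs watch items.
#[derive(Default)]
struct MempoolConfig {
    enabled: bool,
    max_transactions: usize,
    selective: bool,
    watch_item_count: usize,
}

fn validate(cfg: &MempoolConfig) -> Result<(), String> {
    if !cfg.enabled {
        return Ok(()); // nothing to check when tracking is off
    }
    if cfg.max_transactions == 0 {
        return Err("max_transactions must be > 0 when mempool tracking is enabled".into());
    }
    if cfg.selective && cfg.watch_item_count == 0 {
        return Err("Selective strategy needs at least one watch item".into());
    }
    Ok(())
}

fn main() {
    assert!(validate(&MempoolConfig::default()).is_ok());
    let bad = MempoolConfig { enabled: true, max_transactions: 0, ..Default::default() };
    assert!(validate(&bad).is_err());
}
```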
dash-spv/src/chain/chain_tip.rs (1)
147-167: Well-implemented active tip selection.

The method correctly ensures only one tip is marked as active by clearing all flags first, then setting the flag on the tip with the most chain work. The two-phase approach prevents any race conditions.
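The two-phase selection described above can be sketched as follows; `Tip` and the work type are simplified placeholders for the real chain-tip structures:

```rust
// Simplified tip record; the real ChainTip type carries more fields.
struct Tip {
    chain_work: u128,
    is_active: bool,
}

/// Two-phase selection: first clear every active flag, then mark only
/// the tip with the most accumulated work. At most one tip ends up active.
fn select_active_tip(tips: &mut [Tip]) {
    for tip in tips.iter_mut() {
        tip.is_active = false;
    }
    if let Some(best) = tips.iter_mut().max_by_key(|t| t.chain_work) {
        best.is_active = true;
    }
}

fn main() {
    let mut tips = vec![
        Tip { chain_work: 10, is_active: true },
        Tip { chain_work: 30, is_active: false },
        Tip { chain_work: 20, is_active: true },
    ];
    select_active_tip(&mut tips);
    let active: Vec<usize> = tips.iter().enumerate()
        .filter(|(_, t)| t.is_active)
        .map(|(i, _)| i)
        .collect();
    assert_eq!(active, vec![1]); // only the highest-work tip is active
}
```

Clearing first makes the invariant ("exactly one active tip") hold regardless of how many tips were flagged before the call.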
dash-spv/src/bloom/manager.rs (1)
59-84: Good async concurrency design.

The use of `tokio::sync::RwLock` with `Arc` for shared state is appropriate for the async bloom filter operations. Lock scopes are kept minimal, which helps prevent contention.

dash-spv/src/sync/mod.rs (1)
30-33: Clear deprecation notice with migration path.

Good practice marking the legacy sync manager as deprecated and directing users to `SequentialSyncManager`. This helps with the transition to the new sequential sync architecture.

dash-spv/src/chain/chainlock_manager.rs (1)
53-122: Robust chain lock processing with comprehensive validation.

The method properly handles all validation scenarios:
- Conflict detection with existing locks
- Block verification against local chain
- Graceful handling of future blocks
- Proper error propagation
The informational logging throughout helps with debugging.
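The conflict-detection step can be illustrated with a minimal index, assuming one chain lock per height (the types and method names are simplified stand-ins for `ChainLockManager` internals):

```rust
use std::collections::HashMap;

type BlockHash = [u8; 32];

/// Minimal conflict check: a second lock at the same height with a
/// different block hash is a conflict and must be rejected.
struct ChainLockIndex {
    locks: HashMap<u32, BlockHash>,
}

impl ChainLockIndex {
    fn new() -> Self {
        Self { locks: HashMap::new() }
    }

    /// Err on conflict, Ok(true) if newly inserted, Ok(false) if duplicate.
    fn insert(&mut self, height: u32, hash: BlockHash) -> Result<bool, String> {
        match self.locks.get(&height) {
            Some(existing) if *existing != hash => {
                Err(format!("conflicting chain lock at height {}", height))
            }
            Some(_) => Ok(false), // same lock seen again
            None => {
                self.locks.insert(height, hash);
                Ok(true)
            }
        }
    }
}

fn main() {
    let mut index = ChainLockIndex::new();
    assert_eq!(index.insert(100, [1; 32]), Ok(true));
    assert_eq!(index.insert(100, [1; 32]), Ok(false)); // idempotent duplicate
    assert!(index.insert(100, [2; 32]).is_err());      // conflict detected
}
```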
dash-spv/src/chain/reorg.rs (2)
247-300: Well-designed reorganization with proper rollback support.

The implementation elegantly handles both UTXO rollback-capable wallets and simpler wallets that only track confirmation status. The sequential processing of disconnections followed by connections ensures consistency.
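The disconnect-then-connect ordering can be sketched abstractly (heights stand in for full block data; the real implementation updates wallet and storage state at each step):

```rust
/// Sketch of reorg ordering: disconnect the losing branch tip-first,
/// then connect the winning branch from the fork point upward.
fn reorg_order(old_branch: &[u32], new_branch: &[u32]) -> Vec<(String, u32)> {
    let mut ops = Vec::new();
    // Disconnect in reverse so each undo sees a consistent predecessor.
    for &height in old_branch.iter().rev() {
        ops.push(("disconnect".to_string(), height));
    }
    // Connect forward so each block builds on an already-applied parent.
    for &height in new_branch {
        ops.push(("connect".to_string(), height));
    }
    ops
}

fn main() {
    let ops = reorg_order(&[101, 102], &[101, 102, 103]);
    assert_eq!(ops[0], ("disconnect".to_string(), 102));
    assert_eq!(ops[1], ("disconnect".to_string(), 101));
    assert_eq!(ops[2], ("connect".to_string(), 101));
    assert_eq!(ops.len(), 5);
}
```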
83-106: Robust chain lock enforcement with fallback logic.

Good design providing two paths for chain lock validation:

- Using `ChainLockManager` when available (preferred)
- Falling back to direct storage queries
This ensures chain locks are respected even in degraded scenarios.
dash-spv-ffi/src/types.rs (4)
2-3: LGTM: New imports for extended FFI functionality.

The additional imports support the new mempool tracking and detailed sync progress features.
70-70: LGTM: Added filter sync availability tracking.

The new `filter_sync_available` field properly extends the sync progress reporting.

Also applies to: 85-85
323-371: LGTM: Well-structured enum conversions.

The `FFIMempoolStrategy` and `FFIMempoolRemovalReason` enums with their bidirectional conversions are properly implemented and handle all variants correctly.
373-482: Excellent memory management documentation and implementation.

The `FFIUnconfirmedTransaction` struct includes comprehensive safety documentation and properly implemented destruction functions. The documentation clearly explains ownership responsibilities and provides multiple destruction functions for different use cases.

dash-spv-ffi/src/callbacks.rs (2)
82-91: LGTM: Proper optional callback type definitions.

The transition to `Option`-wrapped callback types allows for optional functionality while maintaining type safety. The new mempool callback signatures are well-designed with appropriate parameters.
133-145: LGTM: Enhanced callback methods with proper logging.

The updated callback methods include comprehensive logging and proper string conversions. The address concatenation using `join(",")` is appropriate for the FFI boundary.

Also applies to: 157-194
dash-spv/src/sync/masternodes.rs (4)
38-63: LGTM: Proper terminal block manager integration.

The addition of the `terminal_block_manager` field and its initialization in the constructor is well-implemented.
65-111: LGTM: Well-structured terminal block validation.

The `validate_terminal_block` method provides clear validation logic with appropriate error handling and logging that distinguishes between different validation scenarios.
681-694: LGTM: Proper terminal block validation integration.

The terminal block validation in the diff processing logic is well-integrated and provides appropriate error handling.
735-753: LGTM: Clean public API for terminal block access.

The new public methods for accessing the terminal block manager and retrieving the next terminal block provide a clean interface for external components.
dash-spv/src/storage/mod.rs (2)
5-7: LGTM: New module additions.

The addition of the `sync_state` and `sync_storage` modules with proper public exports extends the storage functionality appropriately.

Also applies to: 22-24
29-48: LGTM: Well-designed synchronous storage trait.

The `ChainStorage` trait provides a clean synchronous interface for chain operations, which is useful for components that need immediate access to chain data without async overhead.
74-74: LGTM: Field visibility change supports architectural requirements.

Making the `syncing_filters` field public enables the sequential sync manager to coordinate filter sync state, which aligns with the new architecture described in the AI summary.
172-186: LGTM: Flow control APIs enhance peer management.

These methods provide essential flow control capabilities:
- Enable/disable flow control for adapting to network conditions
- Check peer support for compact filters before initiating sync
The implementation correctly checks for the `COMPACT_FILTERS` service flag to ensure peers can handle filter requests.
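The check amounts to a bitmask test against the BIP 157 service bit; a minimal sketch (the constant value follows BIP 157's bit 6 assignment, the function name is illustrative):

```rust
/// BIP 157 assigns the compact-filters capability to service bit 6
/// (NODE_COMPACT_FILTERS), advertised in the version handshake.
const NODE_COMPACT_FILTERS: u64 = 1 << 6;
const NODE_NETWORK: u64 = 1; // bit 0: full-node service

/// A peer can serve cfilter/cfheaders requests only if it sets the bit.
fn supports_compact_filters(services: u64) -> bool {
    services & NODE_COMPACT_FILTERS != 0
}

fn main() {
    let filter_node = NODE_NETWORK | NODE_COMPACT_FILTERS;
    assert!(supports_compact_filters(filter_node));
    assert!(!supports_compact_filters(NODE_NETWORK)); // plain full node
    println!("NODE_COMPACT_FILTERS = {}", NODE_COMPACT_FILTERS);
}
```

Gating sync on this check is what lets the client warn early instead of issuing `getcfilters` requests that peers will never answer.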
373-413: LGTM: Enhanced error handling improves robustness.

The fallback logic when tip headers are not found is well-implemented:
- Attempts to use the previous header when at chain tip
- Provides detailed logging for debugging
- Proper error propagation with descriptive messages
This handles edge cases where headers might be temporarily unavailable during sync.
597-602: Excellent defensive programming practice.

Adding the peer capability check before starting sync prevents:
- Unnecessary sync attempts when no peers support compact filters
- Provides clear user feedback about missing peer capabilities
- Graceful degradation instead of failures
The warning messages are informative and guide users toward the solution.
1088-1090: Good state management practice.

Clearing stale state when starting a new sync session prevents interference from previous incomplete attempts. This ensures clean initialization and more reliable sync behavior.
1105-1107: Helpful clarification of flag lifecycle.

These comments clarify the intended behavior of the `syncing_filters` flag, preventing potential misuse and documenting the expected state management lifecycle.
2105-2145: Well-designed flow control APIs.

These methods provide essential external access to filter sync state and operations:
- Clear naming and single responsibilities
- Proper error handling in `send_next_filter_batch`
- Good logging for debugging sync progress
These APIs enable the sequential sync manager to coordinate filter operations effectively.
2219-2235: Well-designed state clearing method.

This method correctly distinguishes between state that should be cleared (tracking/requests) versus preserved (actual received filters). The comment explaining this logic is particularly helpful for maintainability.
2249-2256: Proper error handling in getter method.

This method correctly handles potential mutex lock failures by returning a default value instead of panicking, which is appropriate for a getter method that might be called frequently.
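The non-panicking pattern looks roughly like this; the field and method names are illustrative, not the crate's actual API:

```rust
use std::sync::Mutex;

/// Getter that degrades to a default instead of panicking when the
/// lock is poisoned (i.e. a thread panicked while holding it).
fn received_count(heights: &Mutex<Vec<u32>>) -> usize {
    heights.lock().map(|h| h.len()).unwrap_or_default()
}

fn main() {
    let heights = Mutex::new(vec![100, 101, 102]);
    assert_eq!(received_count(&heights), 3);
    println!("received {} filters", received_count(&heights));
}
```

`lock().map(...).unwrap_or_default()` is a compact alternative to `lock().unwrap()` for read paths where a stale-or-zero answer is acceptable.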
1448-1449: Standard practice for unused parameters.

Adding underscore prefixes to unused parameters is the idiomatic Rust way to satisfy compiler warnings while maintaining interface consistency.
3122-3136: Comprehensive state reset method. This method provides thorough cleanup of all sync-related state:
- Resets sync flags appropriately
- Clears all tracking collections
- Updates progress timestamps
- Good logging for debugging
This is essential for clean recovery and restart scenarios.
dash-spv/src/network/connection.rs (1)
165-195: Verify Headers2 decompression integration. The `handle_headers2_message` method processes compressed headers but doesn't validate that the peer is authorized to send Headers2 messages or that the Headers2 handshake was completed successfully. Run the following script to verify Headers2 authorization checks exist elsewhere:

```shell
#!/bin/bash
# Check for Headers2 authorization logic in handshake or connection management
rg -A 5 -B 5 "headers2.*handshake|handshake.*headers2|sendheaders2|can_request_headers2"
```
dash-spv/src/sync/headers.rs (2)
211-211: Verify 500ms timeout isn't too aggressive. The timeout duration has been reduced from 10 seconds to 500 milliseconds when peers are connected. This very aggressive timeout might cause unnecessary re-requests on slower networks or during high load periods.
This timeout change could cause issues on slower networks. Consider monitoring if this leads to excessive timeout recoveries in production environments.
588-659: Approve comprehensive test coverage. The new test cases effectively verify that the `validate_headers` method correctly handles existing headers in batches, ensuring chain continuity is maintained. This addresses the previous review concern about skipping headers.
dash-spv/src/network/reputation.rs (3)
158-178: Approve integer overflow protection in decay calculation. The decay calculation now properly handles potential integer overflow by using `saturating_mul` and clamping the intervals to prevent overflow. This addresses the previous review concern effectively.
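A minimal sketch of the overflow-safe decay idea: each elapsed interval removes a fixed amount of score, with the interval count clamped and the arithmetic saturating. The constants and function name are illustrative assumptions, not dash-spv's actual reputation API:

```rust
// Illustrative constants; the real module uses its own tuning values.
const DECAY_PER_INTERVAL: i64 = 5;
const MAX_INTERVALS: u64 = 1_000;

fn decayed_score(score: i64, elapsed_intervals: u64) -> i64 {
    // Clamp the interval count so the multiplication cannot overflow
    // even for absurd elapsed times, then saturate the subtraction.
    let intervals = elapsed_intervals.min(MAX_INTERVALS) as i64;
    score.saturating_sub(DECAY_PER_INTERVAL.saturating_mul(intervals))
}

fn main() {
    println!("{}", decayed_score(100, 3)); // 85
    println!("{}", decayed_score(100, u64::MAX)); // clamped, no overflow or panic
}
```

Without the clamp and saturating calls, a large `elapsed_intervals` (e.g. from a bad system clock) would overflow in debug builds or wrap in release builds.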
404-483: Approve comprehensive validation in load_from_storage. The loading function now includes extensive validation for all reputation data fields, with proper clamping of invalid values and detailed logging of any issues. This robust approach prevents corrupted data from affecting the reputation system.
486-531: Approve well-designed ReputationAware trait. The trait provides a clean interface for reputation-based peer selection and connection decisions. The implementation properly sorts peers by reputation score and filters out banned peers.
dash-spv/src/network/multi_peer.rs (2)
305-312: LGTM! Comprehensive handling of Headers2 messages. The implementation correctly:
- Sets the peer's Headers2 support flag when receiving SendHeaders2
- Provides detailed logging for debugging compressed headers
- Properly ignores header requests as an SPV client
- Handles unknown messages with appropriate warning level
Also applies to: 365-390
1105-1199: Well-implemented peer capability checking methods. The new NetworkManager trait methods provide comprehensive functionality for:
- Querying peer heights with detailed debug logging
- Checking for specific service flags across all peers
- Detecting Headers2 support for compressed headers
- Retrieving the last message peer ID
The implementation is thorough with appropriate logging levels.
dash-spv/src/sync/headers_with_reorg.rs (2)
730-854: Well-implemented async-to-sync storage adapter. The `SyncStorageAdapter` correctly bridges the async `StorageManager` with the sync `ChainStorage` interface using `tokio::task::block_in_place`. The comprehensive trace logging aids debugging. Note: the `store_header` limitation (lines 831-843) is acceptable since headers are stored through the async `StorageManager` in production code.
1-899: Comprehensive header sync implementation with fork detection. The module provides a well-structured implementation of header synchronization with:
- ✅ Batch loading for efficient storage operations
- ✅ Fork detection and tracking
- ✅ Checkpoint validation
- ✅ Headers2 (compressed headers) support
- ✅ Proper timeout handling and recovery
- ⚠️ Reorganization logic (currently broken due to a borrow conflict)
The code is well-documented and includes appropriate logging. Once the reorganization issue is resolved, this will provide robust header synchronization.
dash-spv/src/storage/disk.rs (6)
18-18: LGTM: Import additions are appropriate. The new imports support the expanded storage functionality for mempool tracking and terminal blocks, and follow existing patterns.
Also applies to: 23-24
125-126: LGTM: Mempool storage fields appropriately added. In-memory storage for mempool data is appropriate since it's more ephemeral than block data. The `Arc<RwLock<>>` pattern maintains consistency with existing fields.
Also applies to: 245-246
1010-1074: Well-designed logging enhancement. The tiered logging approach (trace/debug/info based on batch size) prevents spam during large syncs while maintaining useful diagnostics. Height tracking aids debugging.
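The tiering can be sketched as a simple threshold function; the thresholds and the `tracing`-style level names below are illustrative assumptions, not the values used in `disk.rs`:

```rust
// Pick a log level from the batch size: routine small writes stay at
// trace, normal sync batches at debug, and large bulk syncs surface
// at info. Thresholds here are invented for the sketch.
fn header_batch_log_level(batch_len: usize) -> &'static str {
    match batch_len {
        0..=9 => "trace",
        10..=999 => "debug",
        _ => "info",
    }
}

fn main() {
    println!("{}", header_batch_log_level(1)); // trace
    println!("{}", header_batch_log_level(5_000)); // info
}
```

In the real code this decision would select between `tracing::trace!`, `tracing::debug!`, and `tracing::info!` rather than returning a string.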
1114-1119: Proper sentinel header filtering in retrieval. The retrieval methods correctly use `valid_count` to ensure sentinel headers are never returned. The bounds checking and debug logging provide good diagnostics while maintaining data integrity.
Also applies to: 1127-1164
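A hypothetical illustration of the `valid_count` idea: storage pads header slots to a fixed capacity, and retrieval slices to the valid count so trailing sentinel entries are never returned. The types and names are invented for the sketch:

```rust
// Stand-in for a real block header.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Header(u32);

// A padded storage segment: `slots` may contain sentinel padding at
// the end; only the first `valid_count` entries are real headers.
struct PaddedSegment {
    slots: Vec<Header>,
    valid_count: usize,
}

impl PaddedSegment {
    // Retrieval never yields padding: slice up to valid_count,
    // bounds-checked against the actual slot length.
    fn headers(&self) -> &[Header] {
        &self.slots[..self.valid_count.min(self.slots.len())]
    }
}

fn main() {
    let seg = PaddedSegment {
        slots: vec![Header(1), Header(2), Header(0), Header(0)], // trailing sentinels
        valid_count: 2,
    };
    println!("{}", seg.headers().len()); // 2
}
```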
1533-1803: Comprehensive persistent storage implementation. The new storage methods demonstrate excellent design patterns:
- Atomic writes: Uses temporary files to ensure data integrity
- Appropriate serialization: JSON for debugging (sync state), bincode for efficiency (locks, blocks)
- Consistent error handling: Proper error mapping and logging
- Directory management: Creates directories as needed
The methods integrate well with the existing storage interface and provide the persistence needed for the enhanced SPV functionality.
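The temp-file-then-rename pattern behind those atomic writes can be sketched as follows; this is a simplified standalone version (the real code also handles serialization formats and error mapping):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write to a temporary sibling file, flush to disk, then rename over
// the target. On the same filesystem the rename atomically replaces
// the target, so readers never observe a half-written file.
fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(data)?;
        f.sync_all()?; // ensure bytes hit disk before the rename
    }
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("sync_state_demo.json");
    atomic_write(&target, b"{\"height\":1000}")?;
    println!("{}", fs::read_to_string(&target)?);
    Ok(())
}
```

One caveat worth noting: `with_extension("tmp")` can collide if two targets differ only by extension, so production code often uses a unique temp-file name instead.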
1806-1879: Comprehensive test coverage for sentinel header behavior. The tests effectively validate:
- Sentinel headers are not returned during retrieval
- Sentinel headers are not persisted to disk
- The `valid_count` mechanism works correctly
This provides strong validation of the header padding fix.
dash-spv-ffi/src/client.rs (3)
166-268: Well-implemented event listener with proper shutdown handling! The event listener correctly:
- Uses timeout-based polling to check the shutdown signal
- Handles all event types including mempool events
- Stores thread handles for cleanup
- Uses the existing FFI conversion for mempool removal reasons
Good implementation of the previous suggestion to use existing conversions.
468-594: Robust implementation of sync with detailed progress tracking! The implementation properly:
- Uses the callback registry for thread-safe callback management
- Spawns and tracks threads for progress monitoring
- Ensures callbacks are unregistered after completion
- Safely manages CString lifetime (dropped after callback returns)
1407-1459: Great implementation of total balance aggregation! The function now properly:
- Retrieves all watched addresses
- Aggregates balances across all addresses
- Handles individual address failures gracefully with warnings
- Returns the total sum instead of stub data
dash-spv/src/client/mod.rs (15)
13-13: LGTM! Imports align with sequential sync architecture. The new imports support the transition to `SequentialSyncManager` and the addition of mempool tracking, event emission, and ChainLock management functionality.
Also applies to: 24-28
50-66: LGTM! Struct fields properly support the sequential sync architecture. The new fields appropriately support:
- Sequential synchronization with `SequentialSyncManager`
- Event and progress reporting via unbounded channels
- Mempool tracking with `MempoolState` and `MempoolFilter`
- ChainLock management and sync state persistence
All fields appear to be thread-safe, with proper `Arc<RwLock<>>` wrapping where needed.
69-91: LGTM! Well-designed API for progress and event handling. The methods provide clean interfaces for:
- External consumption of progress updates via `take_progress_receiver`
- External consumption of events via `take_event_receiver`
- Internal emission of progress and events with appropriate error handling
The use of `Option` types for receivers allows proper ownership transfer to external consumers.
253-263: LGTM! Proper initialization of sync and ChainLock managers. The initialization correctly:
- Creates `SequentialSyncManager` with the existing `received_filter_heights` for continuity
- Enables validation in `ChainLockManager` for security
- Maintains consistency with the sequential sync architecture
273-305: LGTM! Proper async channel and state initialization. The initialization correctly:
- Creates unbounded channels for progress and event communication
- Uses `Arc<RwLock<>>` for thread-safe mempool state
- Properly initializes all new fields in the struct
Unbounded channels are appropriate for progress/event notifications where backpressure is not critical.
323-340: LGTM! Conditional mempool filter initialization with proper persistence. The implementation correctly:
- Only initializes the mempool filter when `enable_mempool_tracking` is true
- Uses current watch items for filter configuration
- Loads persisted mempool state when persistence is enabled
- Includes proper error handling with `map_err`
381-400: LGTM! Robust sync state restoration with proper error handling. The implementation provides:
- Conditional restoration based on persistence settings
- Comprehensive error handling with fallback to fresh sync
- Cleanup of corrupted state to prevent persistent issues
- Informative logging for different scenarios
454-548: LGTM! Comprehensive mempool functionality with proper balance calculations. The mempool methods provide:
- Runtime enabling of mempool tracking
- Correct balance calculations handling both incoming (positive) and outgoing (negative) transactions
- Safe arithmetic using `saturating_sub` to prevent underflow
- Proper filter updates when watch items change
- Transaction send recording for filtering
The balance calculation correctly distinguishes between incoming and outgoing transactions based on the sign of `net_amount`.
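The sign-based handling with saturating arithmetic can be sketched like this; `apply_net_amount` is an invented helper for illustration, not the client's actual method:

```rust
// Incoming amounts (positive net_amount) add to the pending balance;
// outgoing amounts (negative) are subtracted with saturating_sub so
// a stale outgoing entry can never underflow the unsigned total.
fn apply_net_amount(balance_sats: u64, net_amount: i64) -> u64 {
    if net_amount >= 0 {
        balance_sats.saturating_add(net_amount as u64)
    } else {
        balance_sats.saturating_sub(net_amount.unsigned_abs())
    }
}

fn main() {
    let mut bal = 0u64;
    bal = apply_net_amount(bal, 50_000); // incoming
    bal = apply_net_amount(bal, -80_000); // outgoing larger than balance
    println!("{}", bal); // clamps to 0 instead of underflowing
}
```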
555-599: LGTM! Enhanced shutdown process with proper resource cleanup. The implementation provides:
- An idempotency check to prevent duplicate shutdowns
- Sync state persistence before shutdown for consistency
- Graceful error handling: shutdown continues even if the state save fails
- Proper sequential resource cleanup (network → storage → running flag)
- Convenient API aliases for `shutdown` and `start_sync`
646-1004: LGTM! Comprehensive monitoring loop with enhanced progress tracking and error handling. The enhanced monitoring provides:
- Detailed progress tracking with headers per second calculation
- Periodic sync state saving (every 30 seconds) for persistence
- Proper delegation to sequential sync manager for gap handling
- Improved error categorization and recovery strategies
- Timeout-based message receiving for responsive shutdown checking
- Comprehensive status updates with ETA calculations
The implementation properly balances functionality, performance, and maintainability.
1007-1046: LGTM! Well-structured delegation to MessageHandler with proper separation of concerns. The refactoring provides:
- Clean delegation to `MessageHandler` with all required dependencies
- Proper error propagation from the handler
- Appropriate client-level processing for ChainLock and InstantLock messages
- Good separation between message handling and client state management
1147-1167: LGTM! Improved block processing with asynchronous background handling. The implementation provides:
- Non-blocking async block processing for better performance
- Proper use of oneshot channels for coordination
- Immediate return to prevent blocking the monitoring loop
- Appropriate error handling for channel communication failures
1445-1496: LGTM! Comprehensive ChainLock processing with proper validation and state management. The implementation provides:
- Proper validation and storage through `ChainLockManager`
- Height validation to prevent ChainLock downgrades
- Appropriate delegation of masternode validation to sequential sync
- Client state updates and event emission for external consumers
- Clean separation between validation, storage, and state management
1712-2162: LGTM! Comprehensive sync state persistence with robust validation and recovery strategies. The implementation provides:
- Thorough sync state validation with multiple recovery options
- Efficient batch loading of headers with progress tracking
- Proper bounds checking and validation for rollback/checkpoint operations
- Comprehensive error handling with fallback to fresh sync
- Periodic state saving with intelligent checkpointing
- Clean separation between validation, recovery, and storage operations
The recovery strategies (fresh start, rollback, checkpoint, partial) provide appropriate flexibility for different failure scenarios.
1537-1564: LGTM! Proper integration of mempool filtering with watch item management. The enhanced methods provide:
- Automatic mempool filter updates when watch items change
- Conditional execution based on mempool tracking configuration
- Clean integration with the existing `WatchManager` functionality
- Proper error propagation and handling
dash-spv-ffi/include/dash_spv_ffi.h (12)
6-10: LGTM: Well-designed mempool strategy enum. The `FFIMempoolStrategy` enum provides clear options for different mempool tracking approaches with appropriate variant names.
19-27: LGTM: Comprehensive sync stage tracking. The `FFISyncStage` enum covers all necessary synchronization phases with clear, descriptive names that will help with user experience and debugging.
52-63: LGTM: Rich sync progress reporting struct. The `FFIDetailedSyncProgress` struct provides comprehensive metrics for sync progress tracking. The field types are appropriate, and the stage message using `FFIString` maintains consistency with the rest of the API.
73-73: LGTM: Useful addition for filter sync capability detection. The `filter_sync_available` boolean field in `FFISyncProgress` allows clients to determine whether filter synchronization is supported, which is valuable for conditional UI logic.
98-99: LGTM: Mempool balance fields enhance wallet functionality. The `mempool` and `mempool_instant` fields in `FFIBalance` provide essential information for users to understand their pending transactions and instant-send balances.
103-118: Excellent documentation for memory ownership semantics. The comprehensive safety documentation for `FFIArray` clearly explains ownership transfer and caller responsibilities. This is exemplary documentation for FFI interfaces.
162-194: Excellent safety documentation for complex struct. The detailed safety documentation for `FFIUnconfirmedTransaction` is comprehensive and clearly explains the memory management responsibilities. The documentation covers all pointer fields and their lifecycle requirements.
261-275: Excellent documentation for limitations and safety. The detailed documentation for `dash_spv_ffi_client_cancel_sync` clearly explains the current limitations (incomplete cancellation) and safety requirements. This transparency about implementation limitations is valuable for API consumers.
343-354: LGTM: Well-designed mempool API functions. The mempool-related functions provide essential functionality:
- `enable_mempool_tracking` with strategy selection
- `get_balance_with_mempool` for comprehensive balance information
- `get_mempool_transaction_count` for transaction monitoring
- `record_send` for tracking outgoing transactions
- `get_mempool_balance` for address-specific mempool balances
The function signatures are consistent with the existing API design patterns.
382-401: LGTM: Comprehensive mempool configuration API. The configuration functions provide complete control over mempool behavior:
- Enable/disable tracking
- Strategy selection
- Transaction limits
- Timeout configuration
- Fetch options
- Persistence settings
- Getter functions for current settings
The API is well-balanced with both setters and getters.
410-447: Excellent memory management functions with detailed safety documentation. The destroy functions for `FFIUnconfirmedTransaction` components are well-designed:
- Separate functions for different memory regions (raw_tx, addresses)
- Comprehensive destroy function for the entire struct
- Detailed safety documentation explaining proper usage
- Clear warnings about single-use and pointer validity
This follows best practices for FFI memory management.
120-152:

```shell
#!/bin/bash
# Locate FFIEventCallbacks definitions and usage
rg -C3 "FFIEventCallbacks" .

# Search for invocation of each callback across the repo
for cb in on_block on_transaction on_balance_update \
          on_mempool_transaction_added \
          on_mempool_transaction_confirmed \
          on_mempool_transaction_removed; do
    echo "===== Searching for callback invocation: $cb ====="
    rg -C5 "$cb" .
done
```
🤖 CodeRabbitAI Fixes Summary
I've automatically addressed CodeRabbitAI review comments from this PR. Here's what was done:
📊 Statistics
📝 Commits
The following commits were created to address the issues:
✅ Verification
🔍 Review Instructions
📋 Summary of Fixes
This summary was generated automatically by
@coderabbitai review
✅ Actions performed
Review triggered.
Actionable comments posted: 2
♻️ Duplicate comments (10)
dash-spv/src/network/multi_peer.rs (2)
836-848: Eliminate code duplication in `get_last_message_peer_id` methods. The logic for converting `SocketAddr` to `PeerId` is duplicated between the instance method and the trait implementation. Extract the conversion logic into a helper method and have both methods call it:
```diff
+ /// Helper method to convert SocketAddr to PeerId
+ fn addr_to_peer_id(addr: SocketAddr) -> crate::types::PeerId {
+     use std::hash::{Hash, Hasher};
+     let mut hasher = std::collections::hash_map::DefaultHasher::new();
+     addr.hash(&mut hasher);
+     crate::types::PeerId(hasher.finish() as u64)
+ }
+
  /// Get the last message peer as a PeerId
  pub async fn get_last_message_peer_id(&self) -> crate::types::PeerId {
      if let Some(addr) = self.get_last_message_peer().await {
-         // Simple hash-based mapping from SocketAddr to PeerId
-         use std::hash::{Hash, Hasher};
-         let mut hasher = std::collections::hash_map::DefaultHasher::new();
-         addr.hash(&mut hasher);
-         crate::types::PeerId(hasher.finish() as u64)
+         Self::addr_to_peer_id(addr)
      } else {
          // Default to PeerId(0) if no peer available
          crate::types::PeerId(0)
      }
  }
```
Then in the trait implementation:
```diff
  async fn get_last_message_peer_id(&self) -> crate::types::PeerId {
-     // Call the instance method to avoid code duplication
-     self.get_last_message_peer_id().await
+     self.get_last_message_peer_id().await
  }
```
Also applies to: 1188-1191
1190-1190: Fix infinite recursion in trait implementation. The trait method calls itself, causing infinite recursion. It should call the instance method instead.

```diff
  async fn get_last_message_peer_id(&self) -> crate::types::PeerId {
-     self.get_last_message_peer_id().await
+     MultiPeerNetworkManager::get_last_message_peer_id(self).await
  }
```
dash-spv-ffi/src/callbacks.rs (1)
104-116: Safety documentation is now properly included. The unsafe `Send` and `Sync` implementations now have comprehensive inline documentation explaining the safety assumptions, which addresses the previous concern.
176-176: Read timeout default has been increased as suggested. The default `read_timeout` is now set to 100ms, which addresses the previous concern about 15ms being too low.
304-458: Pre-calculated masternode data loading is now implemented. The TODO comment about loading pre-calculated masternode data has been addressed with this comprehensive implementation, which properly parses and loads masternode entries from terminal block data.
65-111: Terminal block validation logic has been properly refactored. The duplicated validation logic has been extracted into this reusable `validate_terminal_block` method, addressing the previous refactoring suggestion.
368-383: Voting key hash parsing is now properly implemented. The TODO about parsing the voting key hash has been addressed. The code now correctly parses the voting address and extracts the PubkeyHash from it with proper error handling.
dash-spv/src/sync/headers2_state.rs (1)
26-26: Magic number has been replaced with a named constant. The hardcoded value `80` has been properly replaced with the `UNCOMPRESSED_HEADER_SIZE` constant, as previously suggested.
Also applies to: 136-137
dash-spv/src/chain/fork_detector.rs (1)
21-23: Max forks validation has been added as suggested. The constructor now properly validates that `max_forks` is greater than 0 with an assertion.
dash-spv/src/sync/headers_with_reorg.rs (1)
325-341: Critical: Chain reorganization is still not implemented. The reorganization logic is explicitly skipped with a warning log, and the fork is simply removed instead of being processed. This means the core functionality of this module remains broken.
🧹 Nitpick comments (3)
dash-spv-ffi/src/callbacks.rs (2)
135-142: Consider reducing logging verbosity for production. The logging uses emojis and is quite verbose, which might be excessive for production environments. Consider using debug level for routine operations or making the verbosity configurable.

```diff
 pub fn call_block(&self, height: u32, hash: &str) {
     if let Some(callback) = self.on_block {
-        tracing::info!("🎯 Calling block callback: height={}, hash={}", height, hash);
+        tracing::debug!("Calling block callback: height={}, hash={}", height, hash);
         let c_hash = CString::new(hash).unwrap_or_else(|_| CString::new("").unwrap());
         callback(height, c_hash.as_ptr(), self.user_data);
-        tracing::info!("✅ Block callback completed");
+        tracing::debug!("Block callback completed");
     } else {
-        tracing::warn!("⚠️ Block callback not set");
+        tracing::debug!("Block callback not set");
     }
 }
```
144-156: Apply consistent logging approach. Similar to the block callback, consider reducing verbosity here as well.

```diff
 pub fn call_transaction(&self, txid: &str, confirmed: bool, amount: i64, addresses: &[String], block_height: Option<u32>) {
     if let Some(callback) = self.on_transaction {
-        tracing::info!("🎯 Calling transaction callback: txid={}, confirmed={}, amount={}, addresses={:?}",
+        tracing::debug!("Calling transaction callback: txid={}, confirmed={}, amount={}, addresses={:?}",
             txid, confirmed, amount, addresses);
         let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
         let addresses_str = addresses.join(",");
         let c_addresses = CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap());
         callback(c_txid.as_ptr(), confirmed, amount, c_addresses.as_ptr(), block_height.unwrap_or(0), self.user_data);
-        tracing::info!("✅ Transaction callback completed");
+        tracing::debug!("Transaction callback completed");
     } else {
-        tracing::warn!("⚠️ Transaction callback not set");
+        tracing::debug!("Transaction callback not set");
     }
 }
```
dash-spv/src/sync/headers_with_reorg.rs (1)
419-425: Good fix for GetHeaders2 support. Consider implementing the timeout mechanism. The previous issue with GetHeaders2 being permanently disabled has been resolved. The fallback to regular GetHeaders on send failure is appropriate.
The TODO comment correctly identifies the need for request tracking and timeout handling. This would improve robustness when peers don't respond to GetHeaders2 requests.
Would you like me to help implement the timeout and fallback mechanism for GetHeaders2 requests?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- dash-spv-ffi/src/callbacks.rs (2 hunks)
- dash-spv/src/chain/fork_detector.rs (1 hunks)
- dash-spv/src/client/config.rs (8 hunks)
- dash-spv/src/network/multi_peer.rs (27 hunks)
- dash-spv/src/sync/headers2_state.rs (1 hunks)
- dash-spv/src/sync/headers_with_reorg.rs (1 hunks)
- dash-spv/src/sync/masternodes.rs (9 hunks)
🔇 Additional comments (18)
dash-spv/src/network/multi_peer.rs (15)
3-3: Excellent integration of reputation management imports. The new imports for `HashMap`, `MempoolStrategy`, and the reputation management components are well-organized and support the enhanced peer management functionality.
Also applies to: 19-19, 26-26
40-41: Well-designed struct field additions for comprehensive peer management. The new fields appropriately extend the manager with reputation tracking, mempool strategy awareness, and enhanced peer state management. The field organization is logical and supports the expanded functionality.
Also applies to: 58-65
74-92: Robust directory creation and error handling. The reputation data loading logic properly ensures directory existence before file operations and handles errors gracefully with appropriate logging. This addresses the previous review concern effectively.
157-174: Excellent reputation-based connection filtering. The pre-connection reputation check prevents wasting resources on known bad peers, and the connection attempt recording provides valuable metrics for reputation scoring.
190-242: Comprehensive reputation tracking for connection outcomes. The reputation updates for successful connections, handshake failures, and connection failures provide nuanced scoring that will improve peer selection over time. The different penalty levels are well-calibrated.
296-298: Enhanced message logging improves network debugging. The detailed logging for received messages, especially headers and unknown commands, will significantly aid in troubleshooting network issues and protocol understanding.
Also applies to: 365-390
305-312: Proper handling of SendHeaders2 protocol extension. The implementation correctly processes the SendHeaders2 message to track peer capabilities for compressed headers support, which is essential for efficient synchronization.
416-441: Intelligent reputation penalties for network errors. The differentiated reputation scoring for timeouts vs. invalid data appropriately reflects the severity of different error types, helping to identify truly misbehaving peers.
487-495: Positive reputation for stable connections encourages good peers. Rewarding peers that maintain long connections balances the penalty system and helps identify reliable network participants.
552-568: Smart peer selection using reputation scores. The maintenance loop's use of reputation-based peer selection will gradually improve network connectivity by preferring reliable peers over unreliable ones.
647-707: Sophisticated peer selection for different message types. The logic to select peers based on compact filter support for filter-related messages, while maintaining sticky sync peer selection for consistency, demonstrates excellent protocol awareness.
822-848: Useful peer management API additions. The new methods for reputation queries, last-message peer tracking, and manual peer banning provide valuable tools for network management and debugging.
1105-1139: Well-implemented peer height discovery with comprehensive logging. The method properly queries all connected peers and returns the maximum height, with detailed debug logging that will aid in troubleshooting synchronization issues.
1141-1172: Efficient service flag checking across all peers. The implementation correctly checks peer service capabilities and returns appropriate results for both boolean queries and detailed peer lists.
1174-1186: Proper headers2 capability detection. The method correctly identifies peers that support compressed headers, which is essential for efficient synchronization with capable peers.
dash-spv/src/client/config.rs (1)
12-21: Well-designed mempool strategy enum. The `MempoolStrategy` enum clearly documents the tradeoffs between privacy and bandwidth for each strategy.
dash-spv/src/sync/headers2_state.rs (1)
1-266: Well-implemented compressed headers state management. The module provides comprehensive per-peer compression state management with detailed statistics tracking and good test coverage.
dash-spv/src/chain/fork_detector.rs (1)
1-295: Comprehensive fork detection implementation. The fork detector provides robust blockchain fork detection with proper handling of edge cases, fork limits, and comprehensive test coverage.
dash-spv-ffi/src/callbacks.rs
Outdated
```rust
pub fn call_mempool_transaction_added(&self, txid: &str, amount: i64, addresses: &[String], is_instant_send: bool) {
    if let Some(callback) = self.on_mempool_transaction_added {
        tracing::info!("🎯 Calling mempool transaction added callback: txid={}, amount={}, is_instant_send={}",
            txid, amount, is_instant_send);
        let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
        let addresses_str = addresses.join(",");
        let c_addresses = CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap());
        callback(c_txid.as_ptr(), amount, c_addresses.as_ptr(), is_instant_send, self.user_data);
        tracing::info!("✅ Mempool transaction added callback completed");
    } else {
        tracing::debug!("Mempool transaction added callback not set");
    }
}

pub fn call_mempool_transaction_confirmed(&self, txid: &str, block_height: u32, block_hash: &str) {
    if let Some(callback) = self.on_mempool_transaction_confirmed {
        tracing::info!("🎯 Calling mempool transaction confirmed callback: txid={}, height={}, hash={}",
            txid, block_height, block_hash);
        let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
        let c_hash = CString::new(block_hash).unwrap_or_else(|_| CString::new("").unwrap());
        callback(c_txid.as_ptr(), block_height, c_hash.as_ptr(), self.user_data);
        tracing::info!("✅ Mempool transaction confirmed callback completed");
    } else {
        tracing::debug!("Mempool transaction confirmed callback not set");
    }
}

pub fn call_mempool_transaction_removed(&self, txid: &str, reason: u8) {
    if let Some(callback) = self.on_mempool_transaction_removed {
        tracing::info!("🎯 Calling mempool transaction removed callback: txid={}, reason={}", txid, reason);
        let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
        callback(c_txid.as_ptr(), reason, self.user_data);
        tracing::info!("✅ Mempool transaction removed callback completed");
    } else {
        tracing::debug!("Mempool transaction removed callback not set");
    }
}
```
🛠️ Refactor suggestion
Standardize logging levels across callback methods.
The logging for "callback not set" is inconsistent - some methods use warn! while the new mempool methods use debug!. Consider using debug! consistently for all "callback not set" messages since it's a normal condition when callbacks are optional.
Additionally, apply the same logging verbosity reduction:
```diff
 pub fn call_mempool_transaction_added(&self, txid: &str, amount: i64, addresses: &[String], is_instant_send: bool) {
     if let Some(callback) = self.on_mempool_transaction_added {
-        tracing::info!("🎯 Calling mempool transaction added callback: txid={}, amount={}, is_instant_send={}",
+        tracing::debug!("Calling mempool transaction added callback: txid={}, amount={}, is_instant_send={}",
             txid, amount, is_instant_send);
         let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
         let addresses_str = addresses.join(",");
         let c_addresses = CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap());
         callback(c_txid.as_ptr(), amount, c_addresses.as_ptr(), is_instant_send, self.user_data);
-        tracing::info!("✅ Mempool transaction added callback completed");
+        tracing::debug!("Mempool transaction added callback completed");
     } else {
         tracing::debug!("Mempool transaction added callback not set");
     }
 }
```
Apply similar changes to `call_mempool_transaction_confirmed` and `call_mempool_transaction_removed`.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub fn call_mempool_transaction_added(&self, txid: &str, amount: i64, addresses: &[String], is_instant_send: bool) {
    if let Some(callback) = self.on_mempool_transaction_added {
        tracing::info!("🎯 Calling mempool transaction added callback: txid={}, amount={}, is_instant_send={}",
            txid, amount, is_instant_send);
        let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
        let addresses_str = addresses.join(",");
        let c_addresses = CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap());
        callback(c_txid.as_ptr(), amount, c_addresses.as_ptr(), is_instant_send, self.user_data);
        tracing::info!("✅ Mempool transaction added callback completed");
    } else {
        tracing::debug!("Mempool transaction added callback not set");
    }
}

pub fn call_mempool_transaction_confirmed(&self, txid: &str, block_height: u32, block_hash: &str) {
    if let Some(callback) = self.on_mempool_transaction_confirmed {
        tracing::info!("🎯 Calling mempool transaction confirmed callback: txid={}, height={}, hash={}",
            txid, block_height, block_hash);
        let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
        let c_hash = CString::new(block_hash).unwrap_or_else(|_| CString::new("").unwrap());
        callback(c_txid.as_ptr(), block_height, c_hash.as_ptr(), self.user_data);
        tracing::info!("✅ Mempool transaction confirmed callback completed");
    } else {
        tracing::debug!("Mempool transaction confirmed callback not set");
    }
}

pub fn call_mempool_transaction_removed(&self, txid: &str, reason: u8) {
    if let Some(callback) = self.on_mempool_transaction_removed {
        tracing::info!("🎯 Calling mempool transaction removed callback: txid={}, reason={}", txid, reason);
        let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
        callback(c_txid.as_ptr(), reason, self.user_data);
        tracing::info!("✅ Mempool transaction removed callback completed");
    } else {
        tracing::debug!("Mempool transaction removed callback not set");
    }
}
```
```rust
pub fn call_mempool_transaction_added(&self, txid: &str, amount: i64, addresses: &[String], is_instant_send: bool) {
    if let Some(callback) = self.on_mempool_transaction_added {
        tracing::debug!("Calling mempool transaction added callback: txid={}, amount={}, is_instant_send={}",
            txid, amount, is_instant_send);
        let c_txid = CString::new(txid).unwrap_or_else(|_| CString::new("").unwrap());
        let addresses_str = addresses.join(",");
        let c_addresses = CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap());
        callback(c_txid.as_ptr(), amount, c_addresses.as_ptr(), is_instant_send, self.user_data);
        tracing::debug!("Mempool transaction added callback completed");
    } else {
        tracing::debug!("Mempool transaction added callback not set");
    }
}
```
🤖 Prompt for AI Agents
In dash-spv-ffi/src/callbacks.rs between lines 167 and 204, the logging level
for "callback not set" messages is inconsistent across the mempool callback
methods. Change all "callback not set" logs in
call_mempool_transaction_confirmed and call_mempool_transaction_removed to use
tracing::debug! instead of tracing::warn! to match the style used in
call_mempool_transaction_added, ensuring consistent and appropriate verbosity
for optional callbacks.
@coderabbitai review
✅ Actions performed: review triggered.
Actionable comments posted: 4
♻️ Duplicate comments (6)
dash-spv/src/storage/disk.rs (1)
385-409: Excellent implementation of sentinel header padding!

The sentinel headers now use clearly distinguishable patterns (max values and 0xFF patterns) that cannot be mistaken for valid block headers. This directly addresses the previous concern about confusion with all-zero padding.
dash-spv/src/chain/fork_detector.rs (1)
142-194: Proper fork management with effective capacity control.

The fork management implementation correctly handles:
- Capacity limits with weakest-fork eviction based on chain work
- Proper re-indexing when forks are updated
- Comprehensive utility methods for fork access
The constructor validation (line 22) appropriately addresses the previous review concern about ensuring `max_forks > 0`.

dash-spv/src/chain/chain_tip.rs (1)
75-105: Atomic tip extension properly addresses previous data loss concerns.

The `extend_tip` method correctly implements atomic behavior by:
- Temporarily storing the old tip before removal (line 92)
- Attempting to add the new tip and handling potential failures
- Restoring the old tip if the new tip addition fails (lines 98-102)
This implementation successfully addresses the previous review concern about potential data loss during tip extension operations.
dash-spv/src/sync/headers_with_reorg.rs (3)
297-368: Successfully resolves the critical borrow conflict issue! 🎉This implementation properly addresses the borrow conflict that was preventing reorganizations:
- Read phase uses `SyncStorageAdapter` (line 306) for immutable access
- Necessary data is cloned before the write phase (lines 313-314)
- Reorganization is now properly executed (lines 325-333) instead of being skipped
- Clean separation between `ChainStorage` and `StorageManager` usage

This is a significant improvement that enables the core reorg functionality.
370-425: GetHeaders2 support is now properly enabled! ✅

The problematic `false &&` condition has been removed, enabling compressed header support. Good implementation with:
- Proper fallback to regular GetHeaders on failure (lines 405-413)
- Clear TODO comments about future timeout mechanism (lines 397-403)
The TODO for timeout and fallback mechanism is acknowledged and would be a good future enhancement.
725-850: Clever adapter pattern solves the async/sync impedance mismatch

The `SyncStorageAdapter` is well-implemented:
- Correct use of `tokio::task::block_in_place` for async-in-sync execution
- Comprehensive trace logging for debugging
- Clear error message for the `store_header` limitation (lines 833-838)
- Proper error propagation throughout
This adapter is crucial for enabling the fork detection logic to work with async storage.
🧹 Nitpick comments (9)
dash-spv-ffi/tests/test_wallet.rs (1)
78-79: Good alignment with new Balance struct, but enhance test coverage for mempool fields.

The addition of `mempool` and `mempool_instant` fields correctly matches the extended Balance struct definition. However, the test doesn't verify that these new fields are properly converted to the FFI structure. Consider enhancing the test coverage by adding assertions for the new fields:
```diff
 let ffi_balance = FFIBalance::from(balance);
 assert_eq!(ffi_balance.confirmed, 100000);
 assert_eq!(ffi_balance.pending, 50000);
 assert_eq!(ffi_balance.instantlocked, 25000);
+assert_eq!(ffi_balance.mempool, 0);
+assert_eq!(ffi_balance.mempool_instant, 0);
 assert_eq!(ffi_balance.total, 175000);
```

Additionally, consider adding a separate test case with non-zero mempool values to verify the conversion behavior:
```rust
#[test]
#[serial]
fn test_balance_conversion_with_mempool() {
    let balance = dash_spv::Balance {
        confirmed: dashcore::Amount::from_sat(100000),
        pending: dashcore::Amount::from_sat(50000),
        instantlocked: dashcore::Amount::from_sat(25000),
        mempool: dashcore::Amount::from_sat(10000),
        mempool_instant: dashcore::Amount::from_sat(5000),
    };
    let ffi_balance = FFIBalance::from(balance);
    assert_eq!(ffi_balance.mempool, 10000);
    assert_eq!(ffi_balance.mempool_instant, 5000);
    // Verify total calculation behavior with mempool amounts
}
```
111-112: Consider the error hierarchy complexity.

Adding `ValidationError::StorageError` creates nested error wrapping, since `SpvError` already has both `Validation` and `Storage` variants. This could lead to confusing error propagation paths like:
- `SpvError::Storage(StorageError::...)`
- `SpvError::Validation(ValidationError::StorageError(StorageError::...))`

Consider whether validation operations should directly return storage errors via the existing `SpvError` hierarchy instead of wrapping them.
45-66: Clarify height indexing documentation and consider parameterizing hardcoded values.

The comment "Skip genesis (height 0)" could be clearer about the intended height mapping. With `count=10`, this generates 9 headers for heights 1-9, assuming genesis is handled elsewhere.

Consider making the hardcoded header values (time offset, bits, etc.) configurable for more flexible testing scenarios.

```diff
-    /// Add a chain of headers for testing
+    /// Add a chain of headers for testing starting from height 1
+    /// Genesis (height 0) is assumed to be handled by ChainState
     pub fn add_headers_chain(&mut self, genesis_hash: BlockHash, count: usize) {
```
118-132: Document the limited message processing behavior.

The `send_message` method only processes `GetHeaders` requests and ignores other message types. Consider documenting this behavior or adding a comment explaining why other messages are not handled in this mock implementation.

```diff
 async fn send_message(&mut self, message: NetworkMessage) -> NetworkResult<()> {
     if !self.connected {
         return Err(NetworkError::NotConnected);
     }
+    // Mock only processes GetHeaders requests for header chain simulation
     // Process GetHeaders requests
     if let NetworkMessage::GetHeaders(ref getheaders) = message {
```
158-166: Consider parameterizing hardcoded mock values for testing flexibility.

The mock peer info uses hardcoded values (address, version, user agent), which may limit testing scenarios. Consider making these configurable through the constructor or setter methods for more comprehensive testing.
This would allow tests to simulate different peer configurations and network conditions.
dash-spv/docs/TERMINAL_BLOCKS.md (1)
28-31: Add language specification to fenced code blocks.

The markdown linter suggests adding language specifications to improve rendering. Consider adding `text` or `console` to these comparison blocks.

````diff
-```
+```text
 Request: Genesis (0) → Current (1,276,272)
 Diff size: ~500MB, covering 1.2M blocks
````

And:

````diff
-```
+```text
 Request: Terminal Block (900,000) → Current (1,276,272)
 Diff size: ~100MB, covering 376K blocks
````

Also applies to: 34-37
dash-spv/src/chain/reorg.rs (3)
80-127: Consider optimizing the fallback chain lock checking

The method logic is correct, but when `chain_lock_manager` is not available, the fallback iterates through every block from fork height to current tip. For deep chains, this could be expensive. Consider adding a maximum iteration limit or checking only key blocks (e.g., every 100th block) in the fallback path.
175-215: Transaction collection is not yet implemented

The method correctly collects headers but leaves transaction collection as a TODO (lines 204-205). This means wallet state may not be fully updated during reorgs until transaction storage is implemented.
Would you like me to help implement transaction collection once the transaction storage layer is available?
427-486: Consider expanding test coverage

The current tests cover basic validation scenarios well. Consider adding tests for:
- Chain lock validation during reorg
- Actual reorg execution (not just validation)
- Common ancestor finding logic
- Edge cases like orphan chains
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- dash-spv-ffi/src/wallet.rs (5 hunks)
- dash-spv-ffi/tests/test_event_callbacks.rs (1 hunk)
- dash-spv-ffi/tests/test_wallet.rs (1 hunk)
- dash-spv-ffi/tests/unit/test_async_operations.rs (11 hunks)
- dash-spv-ffi/tests/unit/test_client_lifecycle.rs (7 hunks)
- dash-spv/docs/TERMINAL_BLOCKS.md (1 hunk)
- dash-spv/examples/reorg_demo.rs (1 hunk)
- dash-spv/src/chain/chain_tip.rs (1 hunk)
- dash-spv/src/chain/fork_detector.rs (1 hunk)
- dash-spv/src/chain/mod.rs (1 hunk)
- dash-spv/src/chain/orphan_pool.rs (1 hunk)
- dash-spv/src/chain/reorg.rs (1 hunk)
- dash-spv/src/chain/reorg_test.rs (1 hunk)
- dash-spv/src/error.rs (6 hunks)
- dash-spv/src/mempool_filter.rs (1 hunk)
- dash-spv/src/network/mock.rs (1 hunk)
- dash-spv/src/storage/disk.rs (15 hunks)
- dash-spv/src/storage/sync_state.rs (1 hunk)
- dash-spv/src/sync/headers2_state.rs (1 hunk)
- dash-spv/src/sync/headers_with_reorg.rs (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (7)
- dash-spv-ffi/tests/unit/test_client_lifecycle.rs
- dash-spv-ffi/tests/test_event_callbacks.rs
- dash-spv-ffi/src/wallet.rs
- dash-spv/src/chain/mod.rs
- dash-spv/src/sync/headers2_state.rs
- dash-spv/src/chain/orphan_pool.rs
- dash-spv/src/storage/sync_state.rs
🧰 Additional context used
🧠 Learnings (3)
dash-spv/src/error.rs (1)
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#51
File: dash/src/sml/masternode_list_entry/hash.rs:7-12
Timestamp: 2025-02-25T06:19:32.230Z
Learning: The `consensus_encode` method on `MasternodeListEntry` writing to a `Vec` buffer cannot fail, so using `.expect()` is appropriate rather than propagating the error with the `?` operator.
dash-spv/src/mempool_filter.rs (1)
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#74
File: key-wallet/src/address.rs:30-40
Timestamp: 2025-06-15T15:31:44.136Z
Learning: In the key-wallet crate, QuantumExplorer prefers keeping wildcard arms that default unknown Network variants to testnet prefixes rather than using exhaustive matches or panics. This fallback behavior is intentional.
dash-spv/src/storage/disk.rs (2)
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#51
File: dash/src/sml/masternode_list_entry/hash.rs:7-12
Timestamp: 2025-02-25T06:19:32.230Z
Learning: The `consensus_encode` method on `MasternodeListEntry` writing to a `Vec` buffer cannot fail, so using `.expect()` is appropriate rather than propagating the error with the `?` operator.
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#51
File: dash/src/sml/masternode_list/scores_for_quorum.rs:50-73
Timestamp: 2025-02-25T06:13:52.858Z
Learning: ScoreHash is a cryptographic hash in the rust-dashcore library, and therefore does not need collision handling when used as a key in collections like BTreeMap due to the extremely low probability of collisions.
🧬 Code Graph Analysis (3)
dash-spv/examples/reorg_demo.rs (3)
dash-spv/src/chain/reorg.rs (2)
- create_test_header (435-441)
- new (60-66)

dash-spv/src/chain/reorg_test.rs (1)
- create_test_header (13-19)

dash-spv/src/types.rs (1)
- new_for_network (187-220)
dash-spv/src/network/mock.rs (2)
dash-spv/src/network/mod.rs (37)
- new (110-121)
- as_any (36-36), as_any (126-128)
- connect (39-39), connect (130-148)
- disconnect (42-42), disconnect (150-156)
- send_message (45-45), send_message (158-165)
- receive_message (48-48), receive_message (167-174)
- is_connected (51-51), is_connected (176-178)
- peer_count (54-54), peer_count (180-186)
- peer_info (57-57), peer_info (188-194)
- send_ping (60-60), send_ping (196-203)
- handle_ping (63-63), handle_ping (205-212)
- handle_pong (66-66), handle_pong (214-221)
- should_ping (69-69), should_ping (223-225)
- cleanup_old_pings (72-72), cleanup_old_pings (227-231)
- get_message_sender (75-75), get_message_sender (233-235)
- get_peer_best_height (78-78), get_peer_best_height (237-247)
- has_peer_with_service (81-81), has_peer_with_service (249-258)
- get_peers_with_service (84-84), get_peers_with_service (260-273)
- get_last_message_peer_id (93-95), get_last_message_peer_id (275-282)

dash-spv/src/types.rs (1)
- address (436-441)
dash-spv/src/chain/chain_tip.rs (2)
dash-spv/src/sync/headers_with_reorg.rs (2)
- new (66-97)
- new (731-733)

dash-spv/src/chain/chain_work.rs (2)
- from_bytes (81-83)
- as_bytes (76-78)
🪛 markdownlint-cli2 (0.17.2)
dash-spv/docs/TERMINAL_BLOCKS.md
28-28: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
34-34: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
🔇 Additional comments (42)
dash-spv-ffi/tests/unit/test_async_operations.rs (9)
4-4: LGTM!

The import for `FFIDetailedSyncProgress` aligns with the enhanced synchronization progress reporting introduced in the FFI layer.
22-33: LGTM! Proper handling of the new detailed progress struct.

The callback correctly handles the new `FFIDetailedSyncProgress` pointer with appropriate null checking before dereferencing. The unsafe block is properly scoped to the pointer dereference operation.
85-92: Good adjustment to avoid hanging the test.

The modification correctly avoids calling `sync_to_tip` on an unstarted client, which would cause the test to hang. The test still validates null callback safety through the destruction path, maintaining the original test intent.
105-130: Smart refactoring to avoid test hangs while maintaining coverage.

The test correctly avoids calling `sync_to_tip` on an unstarted client and instead uses `get_sync_progress` to test null user_data handling. The proper cleanup of the sync progress object with null checking demonstrates good memory management practices.
134-134: Appropriate test configuration for network-dependent tests.

Good practice to mark network-dependent tests with `#[ignore]` to prevent CI failures. The updated function call correctly uses the new `sync_to_tip_with_progress` API that accepts separate progress and completion callbacks.

Also applies to: 148-148
164-164: Consistent API usage for completion-only callbacks.

The test correctly uses `dash_spv_ffi_client_sync_to_tip` when only a completion callback is needed, demonstrating proper API usage patterns.

Also applies to: 181-181
241-266: Excellent safety improvements to prevent runtime-within-runtime issues.

The modifications correctly avoid calling FFI functions from within callbacks, which could cause runtime panics. The simulation approach maintains test coverage while preventing potential deadlocks or runtime conflicts.
334-338: LGTM! Enhanced transaction callback with additional context.

The updated callback signature properly handles the new parameters (amount, addresses, block_height) that provide richer transaction event information to clients.
345-354: Correct initialization of extended event callbacks struct.

The struct properly includes the new mempool-related callbacks (`on_mempool_transaction_added`, `on_mempool_transaction_confirmed`, `on_mempool_transaction_removed`) set to `None`, maintaining compatibility with the enhanced FFI interface while not requiring mempool functionality for this test.

dash-spv/src/error.rs (4)
26-28: LGTM! General error variant is a useful addition.

The `General(String)` variant provides a catch-all for errors that don't fit into specific categories, which is a common and useful pattern.
49-50: LGTM! NotConnected variant enhances connection state tracking.

This addition allows for more precise error handling when operations are attempted on disconnected network clients.
79-84: LGTM! Storage error variants are well-designed.

The `InconsistentState` and `LockPoisoned` variants address common storage layer issues and provide clear error semantics.
190-215: Excellent test coverage for error categorization.

The unit tests comprehensively cover all error variants and categorization logic, including edge cases and fallback behavior. This provides good confidence in the categorization method's correctness.
dash-spv/src/network/mock.rs (2)
1-29: LGTM! Well-structured mock network manager.

The imports are appropriate and the struct design effectively uses async patterns with proper separation of concerns for connection state, message handling, and header chain management.
68-98: Excellent implementation of GetHeaders protocol logic.

The locator hash matching and header slice calculation correctly implement the Bitcoin protocol specification, including the 2000-header limit and proper edge case handling.
dash-spv/src/storage/disk.rs (7)
18-18: LGTM! Well-structured additions for new functionality.

The new imports and struct fields appropriately support the expanded storage capabilities including mempool tracking and sentinel header management.
Also applies to: 23-24, 83-83, 125-126
900-908: Proper sentinel header exclusion from disk persistence.

The save function correctly identifies and skips sentinel headers using the same patterns used to create them, ensuring only real block headers are persisted to disk.
1010-1074: Well-designed logging with appropriate verbosity levels.

The tiered logging approach (trace/debug/info based on batch size) provides useful debugging information while preventing log spam during large operations.
1114-1164: Correct implementation of sentinel header exclusion in retrieval methods.

The updated methods properly use `valid_count` to exclude sentinel headers from results, ensuring callers only receive actual block headers. The bounds checking and logging are appropriate.
1533-1804: Comprehensive persistent storage implementation with good practices.

The new storage methods use appropriate serialization formats, atomic writes via temporary files, and proper error handling. The distinction between JSON (human-readable) and binary formats is well-chosen.
1806-1879: Essential test coverage for sentinel header functionality.

The new tests properly verify that sentinel headers are excluded from both retrieval operations and disk persistence, providing confidence in the implementation.
1771-1803: Appropriate mempool storage implementation.

The in-memory HashMap approach is suitable for mempool data, with proper async patterns and complete CRUD operations for mempool transaction and state management.
dash-spv/docs/TERMINAL_BLOCKS.md (1)
1-147: Excellent comprehensive documentation for the terminal blocks system.

This documentation thoroughly covers the concept, benefits, implementation details, security considerations, and usage of the terminal blocks system. The structure is logical, the examples are helpful, and the coverage of limitations and future enhancements demonstrates thoughtful design.
dash-spv/src/mempool_filter.rs (5)
14-44: Well-structured implementation with clear field organization.

The `MempoolFilter` struct design is clean, with appropriate field types, and the constructor properly initializes all components. The use of `Arc<RwLock<MempoolState>>` for shared state management is appropriate for async operations.
46-65: Strategy-based filtering logic is well-implemented.

The method correctly handles each mempool strategy with appropriate logic:
- `FetchAll`: Properly enforces capacity limits
- `BloomFilter`: Correctly delegates to the network layer
- `Selective`: Uses recent send tracking as intended

The async operation with read locks is handled appropriately.
67-119: Comprehensive transaction relevance checking with excellent tracing.

The method properly handles all watch item types (`Address`, `Script`, `Outpoint`) with correct matching logic. The detailed tracing statements will be valuable for debugging. The early return optimization when relevance is found is efficient.
121-204: Robust transaction processing with proper fallback handling.

The method implements comprehensive transaction processing with good error handling:
- Proper fee calculation with fallback to partial calculation and then zero
- Correct InstantSend detection using wallet integration
- Appropriate handling of different mempool strategies
- Good separation of concerns between fetching and callback triggering for the `FetchAll` strategy

The implementation addresses the complexity well with clear logic flow.
206-850: Excellent test coverage with comprehensive scenario testing.

The helper methods are well-implemented and the test suite is thorough:
- All mempool strategies covered
- All watch item types tested (address, script, outpoint)
- Capacity limits and eviction tested
- Mock implementations properly simulate wallet behavior
- Edge cases like multiple watch items and fetch-all strategy handling
The ignored tests appropriately indicate integration requirements with a real `Wallet` implementation.

dash-spv/examples/reorg_demo.rs (1)
1-106: Well-structured demonstration with clear educational value.

This demo effectively illustrates the chain reorganization functionality with:
- Proper setup of test components and chains
- Clear progression from main chain to fork creation
- Good error handling and informative output messages
- Correct async operation handling
The maximum chain work value (`[255u8; 32]`) ensures the fork wins the reorganization, which is appropriate for demonstration purposes.

dash-spv/src/chain/reorg_test.rs (1)
1-180: Comprehensive test coverage for reorganization functionality.

The test suite effectively covers key reorganization scenarios:
- Basic reorganization without borrow conflicts
- Common ancestor detection
- Deep reorganization with multiple block disconnections/connections
Each test properly verifies:
- Reorganization success/failure
- Event details and metadata
- Chain state consistency
- Storage persistence
The async test patterns and assertions are well-implemented.
dash-spv/src/chain/fork_detector.rs (2)
29-140: Complex but well-structured fork detection logic with comprehensive scenario handling.

The `check_header` method correctly handles all fork detection scenarios:
- Main chain extensions with proper tip validation
- Genesis connection edge cases for initial sync
- Existing fork extensions with proper re-indexing
- New fork creation from storage or chain state connections
- Orphan detection for unconnected headers
The special genesis handling (lines 60-71) is crucial for correct initial synchronization behavior. The fork re-indexing logic (lines 81-91) properly maintains the hash-based lookup structure.
209-301: Comprehensive test coverage validating all fork detection scenarios.

The test suite effectively covers:
- Main chain extension detection
- Fork creation and extension scenarios
- Orphan block identification
- Capacity limit enforcement with proper eviction
- Constructor validation for invalid parameters
The test assertions properly verify the expected `ForkDetectionResult` variants and fork state changes.

dash-spv/src/chain/chain_tip.rs (3)
10-73: Well-designed tip management with proper capacity handling.

The `ChainTip` and `ChainTipManager` structures are appropriately designed:
- `ChainTip` contains all necessary metadata including chain work and active status
- `add_tip` correctly handles capacity constraints with eviction before insertion
- Automatic active tip updates ensure the best chain is always tracked
The implementation properly manages the hash-based indexing for efficient lookups.
107-191: Robust utility methods with proper active tip management.

The utility methods are well-implemented with appropriate safety measures:
- Active tip updates correctly maintain the best chain selection
- Eviction logic protects the active tip from removal
- Proper error handling for edge cases (e.g., tip not found)
- Clean state management in the `clear()` method

The chain work-based comparison for tip selection ensures the strongest chain is always active.
193-324: Thorough test coverage validating tip management and atomicity.

The test suite comprehensively covers:
- Basic tip addition and active tip selection based on chain work
- Capacity management with proper eviction of weakest tips
- Atomic tip extension operations ensuring consistency
- Edge cases and state transitions
The atomic tests (lines 254-323) specifically validate the fixes for the previous data loss concerns in tip extension operations.
dash-spv/src/chain/reorg.rs (4)
15-46: Well-structured reorg event data types

The `ReorgEvent` and `ReorgData` structs are well-designed with clear documentation and appropriate visibility. Good use of conditional compilation for test-only derives.
58-78: Clean constructor pattern for ReorgManager

Good design with two constructors that prevent misconfiguration: the chain lock version automatically enables chain lock respect.
129-152: Excellent solution to the borrow conflict issue!

The two-phase reorganization approach cleanly separates read and write operations, effectively solving the borrow conflict issue that was blocking reorg functionality. This is a significant improvement over the previous implementation.
217-267: Robust reorg application with proper fallback handling

Excellent implementation that handles both scenarios, with and without wallet rollback support. The fallback to marking transactions as unconfirmed ensures wallet consistency even without full rollback capability.
dash-spv/src/sync/headers_with_reorg.rs (3)
99-166: Efficient batch loading with good progress tracking

Well-implemented header loading with:
- Reasonable batch size (10,000 headers)
- Progress logging every 50,000 headers
- Performance metrics in the final log
Good approach for handling large blockchain histories.
223-295: Robust checkpoint enforcement prevents deep reorgs

Excellent implementation of checkpoint validation that:
- Validates headers against checkpoints during normal sync (lines 246-253)
- Prevents forks that would reorganize past checkpoints (lines 272-279)
- Properly handles the genesis edge case (line 271)
This provides strong protection against deep chain reorganizations.
612-636: Checkpoint fast-sync not yet implemented

The `get_sync_starting_point` method has a TODO (line 616) indicating checkpoint-based fast sync is planned but not implemented.

Also, the 1000-block threshold in `can_skip_to_checkpoint` (line 624) seems arbitrary. Consider making this configurable or documenting the rationale.

What is the recommended minimum distance for checkpoint-based fast sync in SPV clients to ensure security?
Actionable comments posted: 37
🔭 Outside diff range comments (8)
swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h (3)
182-214: FFIUnconfirmedTransaction ownership docs are contradictory and risk misuse

The docs state `raw_tx` and `addresses` "must be allocated by the caller before passing to Rust," yet you provide dedicated destroy functions that imply Rust allocated the memory and transfers ownership to C. This ambiguity is dangerous in FFI and can cause leaks or double-frees.
- Clarify the ownership model unambiguously.
- If Rust allocates, document that clearly and keep the destroy functions.
- If the caller allocates, remove destroy functions or clearly state they do not free caller-owned memory.
I can help propose doc updates aligned with the actual implementation.
125-133: Callback 'addresses' as C string is fragile; prefer FFIArray of FFIString

Passing addresses as a single `const char*` is ambiguous (encoding, delimiters, null bytes). Consider:
- Replace with `FFIArray` of `FFIString` for both TransactionCallback and MempoolTransactionCallback.
- Alternatively, provide an additional API variant that uses typed arrays, keeping the string variant for backward compatibility.
This improves safety and internationalization.
488-491: Global last-error API is not thread-safe; prefer function-scoped FFIResult

Given multi-threaded callbacks and async sync flows, `dash_spv_ffi_get_last_error`/`clear_error` is brittle. Prefer returning `FFIResult` (or an error code plus an out-`FFIString`) from APIs instead of relying on thread-local/global error buffers. This avoids data races and error misattribution across threads.

dash-spv/src/sync/mod.rs (1)
137-145: Don't silently discard timeout errors from sub-sync managers

Swallowing results with `let _ =` hides actionable errors and complicates incident triage. Propagate them, or at least log them with component context.
```diff
-        let _ = self.header_sync.check_sync_timeout(storage, network).await;
-        let _ = self.filter_sync.check_sync_timeout(storage, network).await;
-        let _ = self.masternode_sync.check_sync_timeout(storage, network).await;
+        if let Err(e) = self.header_sync.check_sync_timeout(storage, network).await {
+            tracing::warn!("Header sync timeout check failed: {e}");
+        }
+        if let Err(e) = self.filter_sync.check_sync_timeout(storage, network).await {
+            tracing::warn!("Filter sync timeout check failed: {e}");
+        }
+        if let Err(e) = self.masternode_sync.check_sync_timeout(storage, network).await {
+            tracing::warn!("Masternode sync timeout check failed: {e}");
+        }
```

dash-spv/src/chain/checkpoints.rs (2)
582-588: Update fork-rejection test for the new last checkpoint

With a last checkpoint at height 2,300,000, a fork at 2,000,000 should now be rejected (<= last checkpoint). Adjust the test to assert that a fork after the last checkpoint is not rejected.

```diff
-    // Should not reject fork after last checkpoint
-    assert!(!manager.should_reject_fork(2000000));
+    // Should not reject fork after last checkpoint
+    assert!(!manager.should_reject_fork(2_300_001));
```
595-600: Update ML checkpoint expectation to reflect the latest with masternode list
Since height 2,300,000 includes an ML tag, it should be the last checkpoint with a masternode list.
- assert_eq!(ml_checkpoint.expect("Should have ML checkpoint").height, 1900000);
+ assert_eq!(ml_checkpoint.expect("Should have ML checkpoint").height, 2_300_000);
swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Core/SPVClient.swift (2)
1171-1205: Fix retained EventCallbackHolder leak
You retain the EventCallbackHolder userData with Unmanaged.passRetained but never release it. This leaks across the client lifecycle. Keep a raw pointer reference and release it on stop/deinit after unregistering callbacks (if available).
Minimal fix:
- private var eventCallbacksSet = false
+ private var eventCallbacksSet = false
+ private var eventCallbackUserData: UnsafeMutableRawPointer?
@@
- let userData = Unmanaged.passRetained(eventHolder).toOpaque()
+ let userData = Unmanaged.passRetained(eventHolder).toOpaque()
+ self.eventCallbackUserData = userData
@@
  let result = dash_spv_ffi_client_set_event_callbacks(client, callbacks)
Then release on teardown:
  deinit {
@@
- if eventCallbackHolder != nil {
-     // The userData was retained, so we need to release it
-     // Note: This is only needed if client is destroyed before callbacks complete
- }
+ if let ptr = eventCallbackUserData {
+     Unmanaged<EventCallbackHolder>.fromOpaque(ptr).release()
+     eventCallbackUserData = nil
+ }
Optionally, also release on stop() after clearing callbacks if the FFI supports a “clear callbacks” call.
92-103: Use explicit length field for FFIString conversion
We’ve confirmed that FFIString declares its size field as length, not len. Update the conversion in SPVClient to read from stage_message.length so we avoid any UB or over-reads.
• File: swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Core/SPVClient.swift
• Location: internal init(ffiProgress: FFIDetailedSyncProgress)
- self.stageMessage = String(cString: ffiProgress.stage_message.ptr)
+ // Convert FFI string using explicit length to avoid reliance on null-termination
+ let rawPtr = UnsafeRawPointer(ffiProgress.stage_message.ptr)
+ let length = Int(ffiProgress.stage_message.length)
+ self.stageMessage = String(
+     bytesNoCopy: UnsafeMutableRawPointer(mutating: rawPtr),
+     length: length,
+     encoding: .utf8,
+     freeWhenDone: false
+ ) ?? ""
♻️ Duplicate comments (6)
dash-spv-ffi/src/platform_integration.rs (1)
23-27: Thread-local error storage is unsafe for FFI
Using dash_spv_ffi_get_last_error() to get the error message relies on thread-local storage, which is unsafe in multi-threaded FFI contexts. The error might be set on one thread but read from another, leading to lost errors or race conditions.
Consider returning the error message directly in the FFIResult struct or using a thread-safe error reporting mechanism:
  fn error(code: FFIErrorCode, message: &str) -> Self {
-     set_last_error(message);
+     // Use CString for the error message directly
+     let c_message = std::ffi::CString::new(message)
+         .unwrap_or_else(|_| std::ffi::CString::new("Error message contains null byte").unwrap());
+     let c_ptr = c_message.into_raw(); // Transfer ownership to caller
      FFIResult {
          error_code: code as i32,
-         error_message: crate::dash_spv_ffi_get_last_error(),
+         error_message: c_ptr,
      }
  }
Note: The caller will need to free the error_message pointer when done.
dash-spv/src/storage/disk.rs (3)
131-140: Good extraction of sentinel header creation
The sentinel header has been properly extracted into a standalone function, addressing the previous code duplication issue. The invalid values ensure it cannot be mistaken for a valid block.
828-831: Correct valid_count tracking logic
The valid_count update logic now correctly ensures it represents contiguous valid headers without gaps by only incrementing when offset equals the current valid_count.
1019-1029: Sentinel headers correctly excluded from disk persistence
The implementation properly skips sentinel headers when saving to disk, ensuring only valid headers are persisted.
dash-spv-ffi/src/error.rs (1)
6-7: Thread-local error storage remains problematic for multi-threaded FFI
While the code uses a global Mutex instead of thread-local storage, the comment incorrectly states it's "thread-local". More importantly, this approach still has issues in FFI contexts where errors may be retrieved from different threads than where they were set.
The current implementation stores the last error globally, but FFI calls from different threads could overwrite each other's errors. Consider:
- Thread A calls FFI function that sets an error
- Thread B calls FFI function that sets a different error
- Thread A retrieves the error, but gets Thread B's error instead
A safer approach would be to return errors directly through the FFI interface using out parameters or result structs.
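As a sketch of the out-parameter approach, the example below reports an error code and an owned message through arguments rather than global state. All names (`ErrCode`, `get_height`, the stand-in client handle) are hypothetical; the real FFI surface and error enum differ, and a matching `free_error_message` on the boundary is implied but not shown.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

/// Error codes assumed for illustration; the real FFIErrorCode enum differs.
#[repr(i32)]
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum ErrCode {
    Success = 0,
    NullPointer = 1,
}

/// Sketch: report errors through an out-parameter instead of a global buffer.
/// On failure the callee allocates a message the caller must later free with a
/// matching free function (not shown).
unsafe fn get_height(
    client: *const u32, // stand-in for the real client handle type
    out_height: *mut u32,
    out_error: *mut *mut c_char,
) -> ErrCode {
    if client.is_null() || out_height.is_null() {
        if !out_error.is_null() {
            let msg = CString::new("null pointer argument").unwrap();
            *out_error = msg.into_raw(); // ownership transferred to caller
        }
        return ErrCode::NullPointer;
    }
    *out_height = *client; // pretend the handle stores the height
    ErrCode::Success
}
```

Because the error travels with the call that produced it, two threads calling concurrently can never observe each other's messages.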
dash-spv/src/sync/sequential/mod.rs (1)
1133-1150: Empty locator vector issue persists
As noted in the past review comments, when no headers are stored, the code returns an empty locator vector which causes inefficient syncing from genesis. This should use the genesis hash as the starting point.
🧹 Nitpick comments (53)
dash-spv-ffi/src/client.rs (1)
160-160: Performance improvement: Increased worker threads from 1 to 4.
The increase in worker threads should improve concurrent task handling on iOS. However, consider the following:
- iOS devices have varying CPU core counts (2-6 cores)
- More threads may increase battery consumption
Consider making this configurable or adaptive based on the device capabilities.
- .worker_threads(4) // Use 4 threads for better performance on iOS
+ // Adaptive thread count: min 2, max 4, based on available cores
+ .worker_threads(std::cmp::min(4, std::cmp::max(2, num_cpus::get())))
Note: This would require adding the num_cpus crate as a dependency.
dash-spv/src/sync/sequential/phases.rs (1)
344-350: Consider making efficiency thresholds configurable.
The hardcoded efficiency thresholds (70% for "High Efficiency", 30% for "Standard Efficiency") might need adjustment based on network conditions or user preferences.
+/// Efficiency thresholds for QRInfo ratio categorization
+const HIGH_EFFICIENCY_THRESHOLD: f64 = 70.0;
+const STANDARD_EFFICIENCY_THRESHOLD: f64 = 30.0;
+
- let efficiency = if qr_info_ratio > 70.0 {
+ let efficiency = if qr_info_ratio > HIGH_EFFICIENCY_THRESHOLD {
      "High Efficiency"
- } else if qr_info_ratio > 30.0 {
+ } else if qr_info_ratio > STANDARD_EFFICIENCY_THRESHOLD {
      "Standard Efficiency"
  } else {
      "Targeted Sync"
  };
2-4: Consider grouping related imports together
The imports are currently mixed. Consider organizing them for better readability - group dashcore imports together.
  use crate::{set_last_error, FFIDashSpvClient, FFIErrorCode};
-use dashcore::hashes::Hash;
-use dashcore::sml::llmq_type::LLMQType;
-use dashcore::QuorumHash;
+use dashcore::{
+    hashes::Hash,
+    sml::llmq_type::LLMQType,
+    QuorumHash,
+};
131-133: Simplify slice conversion
The manual array copy can be replaced with a more idiomatic approach.
- let quorum_hash_bytes = std::slice::from_raw_parts(quorum_hash, 32);
- let mut hash_array = [0u8; 32];
- hash_array.copy_from_slice(quorum_hash_bytes);
+ let hash_array: [u8; 32] = std::slice::from_raw_parts(quorum_hash, 32)
+     .try_into()
+     .expect("slice with incorrect length");
346-346: Missing language specifier for fenced code block
The code block should specify a language for proper syntax highlighting.
-```
+```rust
// The existing storage layer already provides what we need:
136-138: Address the BLS signature verification placeholder
The document mentions that BLS signature verification is currently a placeholder. This is a critical security component that should be prioritized.
Would you like me to help implement the actual BLS signature verification logic using the dashcore cryptographic primitives? This would involve:
- Verifying the signature against the quorum public key
- Ensuring proper message formatting for chain lock validation
- Adding appropriate error handling for invalid signatures
dash-spv/src/storage/disk.rs (1)
1127-1130: Consider caching sync_base_height to avoid repeated lookups
The code loads chain state on every store_headers call to get sync_base_height. This could be inefficient for frequent header storage operations.
Consider caching the sync_base_height value in the DiskStorageManager struct:
  pub struct DiskStorageManager {
      base_path: PathBuf,
+     // Cached sync base height for blockchain height calculations
+     sync_base_height: Arc<RwLock<u32>>,
      // ... rest of the fields
  }
Then update it when chain state is stored/loaded and use the cached value in store_headers operations.
Also applies to: 1191-1193
dash-spv-ffi/tests/test_platform_integration_minimal.rs (1)
6-18: Strengthen null-pointer test coverage and assert out param stability on error
Good minimalist sanity test. Two small improvements:
- Assert that out_height remains unchanged when the function returns an error.
- Also cover the null out_height pointer path explicitly.
Suggested changes:
- let mut height: u32 = 0;
- let result =
-     ffi_dash_spv_get_platform_activation_height(ptr::null_mut(), &mut height as *mut u32);
- assert_eq!(result.error_code, FFIErrorCode::NullPointer as i32);
+ // Height should remain unchanged on error
+ let mut height: u32 = 42;
+ let result =
+     ffi_dash_spv_get_platform_activation_height(ptr::null_mut(), &mut height as *mut u32);
+ assert_eq!(result.error_code, FFIErrorCode::NullPointer as i32);
+ assert_eq!(height, 42);
+
+ // Null out pointer should also produce a NullPointer error
+ let result2 = ffi_dash_spv_get_platform_activation_height(ptr::null_mut(), ptr::null_mut());
+ assert_eq!(result2.error_code, FFIErrorCode::NullPointer as i32);
147-147: Define a typed enum for mempool removal reasons
uint8_t reason lacks semantic clarity and invites mismatches. Introduce an enum for reasons (e.g., Replaced, Expired, Conflict, SizeLimit, NodeShutdown), and use it in the callback signature.
Example (header-only):
+typedef enum FFIMempoolRemovalReason {
+    RemovalReplaced = 0,
+    RemovalExpired = 1,
+    RemovalConflict = 2,
+    RemovalSizeLimit = 3,
+    RemovalOther = 255,
+} FFIMempoolRemovalReason;
-typedef void (*MempoolRemovedCallback)(const uint8_t (*txid)[32], uint8_t reason, void *user_data);
+typedef void (*MempoolRemovedCallback)(const uint8_t (*txid)[32],
+                                       enum FFIMempoolRemovalReason reason,
+                                       void *user_data);
519-524: Introduce constants for fixed sizes (hash/pubkey) to prevent mismatches
Instead of documenting sizes verbally, define constants/macros clients can reuse:
+#define DASH_SPV_HASH32_SIZE 32
+#define DASH_SPV_BLS_PUBKEY_SIZE 48
+#define DASH_SPV_BLS_SIG_SIZE 96
 /* Documentation ... */
 /* - quorum_hash must point to a 32-byte array */
 /* - out_pubkey_size must be at least 48 bytes */
This reduces magic numbers and consumer mistakes.
598-599: Add binary txid variant for outpoint watch to avoid hex parsing overhead
dash_spv_ffi_watch_item_outpoint(const char *txid, uint32_t vout) forces hex parsing and is error-prone. Suggest adding:
struct FFIWatchItem *dash_spv_ffi_watch_item_outpoint_bin(const uint8_t (*txid)[32], uint32_t vout);
to align with other callbacks that already use [32] binary hashes.
dash-spv/src/sync/discovery.rs (1)
23-40: Enforce basic invariants in QRInfoRequest::new (heights and hashes)
Add light-weight validation (debug or Result-based) to ensure:
- base_height <= tip_height
- hashes are not all-zero (unless you explicitly allow sentinel values)
Minimal change keeping API:
  pub fn new(
@@
  ) -> Self {
-     Self {
+     debug_assert!(base_height <= tip_height, "base_height must be <= tip_height");
+     // If zero hashes are allowed, drop these asserts.
+     // debug_assert!(base_hash != BlockHash::all_zeros(), "base_hash must be non-zero");
+     // debug_assert!(tip_hash != BlockHash::all_zeros(), "tip_hash must be non-zero");
+     Self {
          base_height,
          tip_height,
          base_hash,
          tip_hash,
          extra_share,
      }
  }
If you prefer hard guarantees, consider a try_new(...) -> Result<Self, Error> constructor.
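A fallible constructor along those lines could look like the sketch below. The field layout is simplified for illustration (raw byte arrays stand in for BlockHash, and the error enum is assumed); the real QRInfoRequest type would keep its existing fields.

```rust
#[derive(Debug, PartialEq)]
enum RequestError {
    InvalidRange,
    ZeroHash,
}

/// Simplified stand-in for the real QRInfoRequest, which carries BlockHash
/// values rather than raw byte arrays.
#[allow(dead_code)]
struct QRInfoRequest {
    base_height: u32,
    tip_height: u32,
    base_hash: [u8; 32],
    tip_hash: [u8; 32],
    extra_share: bool,
}

impl QRInfoRequest {
    /// Fallible constructor enforcing the invariants at the API boundary
    /// instead of relying on debug assertions.
    fn try_new(
        base_height: u32,
        tip_height: u32,
        base_hash: [u8; 32],
        tip_hash: [u8; 32],
        extra_share: bool,
    ) -> Result<Self, RequestError> {
        if base_height > tip_height {
            return Err(RequestError::InvalidRange);
        }
        if base_hash == [0u8; 32] || tip_hash == [0u8; 32] {
            return Err(RequestError::ZeroHash);
        }
        Ok(Self { base_height, tip_height, base_hash, tip_hash, extra_share })
    }
}
```

Unlike debug_assert!, the checks survive release builds and force call sites to handle bad inputs explicitly.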
116-174: Prefer builder/defaults for ClientConfig in tests to reduce churn
The exhaustive struct literal is brittle when fields are added. Consider starting from ClientConfig::default() and override only necessary fields, including the newly added QRInfo fields. This keeps tests resilient to config growth.
17-17: Remove or use the unused parameter _base_height
The _base_height parameter is never used. Either use it for constructing base hashes or remove it to avoid confusion.
-fn create_test_qr_info(_base_height: u32, tip_height: u32) -> QRInfo {
+fn create_test_qr_info(tip_height: u32) -> QRInfo {
Note: Update call sites accordingly.
dash-spv/tests/phase2_integration_test.rs (2)
83-88: Make priority ordering assertion less brittle
The test assumes requests are returned in non-increasing priority order. Unless the API guarantees sorted output, this can be flaky. Consider sorting locally before asserting or relaxing the invariant to a set-based property.
Example adjustment:
- if requests.len() > 1 {
-     for window in requests.windows(2) {
-         assert!(window[0].priority >= window[1].priority);
-     }
- }
+ if requests.len() > 1 {
+     let mut priorities: Vec<_> = requests.iter().map(|r| r.priority).collect();
+     let mut sorted = priorities.clone();
+     sorted.sort_by(|a, b| b.cmp(a)); // desc
+     assert_eq!(priorities, sorted, "planner should return requests sorted by priority");
+ }
155-161: Avoid magic thresholds in network-dependent tests
Asserting avg_batch_size <= 3.0 under “poor” conditions can be brittle if batching heuristics evolve. Prefer relative assertions (e.g., smaller than under neutral conditions) to avoid future false negatives.
dash-spv/src/sync/mod.rs (1)
56-66: Surface initialization failures with more context
Good: Reorg-aware header sync with ReorgConfig::default. Consider including reorg_config values in the error to aid diagnostics when HeaderSyncManagerWithReorg::new fails.
dash-spv/tests/phase3_integration_test.rs (1)
40-56: Correlation manager test is minimal; consider verifying completion path
You validate registration and non-expiration; adding a responder simulation (sending a matched response and asserting pending_count() decrements) would strengthen coverage of the happy path.
dash-spv/src/validation/test_summary.md (1)
58-60: Tighten wording per style hint (“very” overuse)
Replace “Very large chains” with “Large chains” to avoid weak intensifiers.
-  - Very large chains (1000+ headers) are handled
+  - Large chains (1000+ headers) are handled
1273-1283: Avoid noisy logs in the per-tick progress wrapper
Same as above; these logs fire per progress tick. Consider DEBUG gating to avoid overhead.
- print("🟣 FFI progress callback wrapper called")
+ #if DEBUG
+ print("🟣 FFI progress callback wrapper called")
+ #endif
@@
- print("🟣 Converting FFI progress to Swift DetailedSyncProgress")
+ #if DEBUG
+ print("🟣 Converting FFI progress to Swift DetailedSyncProgress")
+ #endif
@@
- print("🟣 Failed to cast progress to FFIDetailedSyncProgress")
+ #if DEBUG
+ print("🟣 Failed to cast progress to FFIDetailedSyncProgress")
+ #endif
451-456: Use FFI-provided totals consistently (avoid recomputing)
In eventBalanceCallback you compute total = confirmed + unconfirmed, while other paths use ffiBalance.total. Prefer consistent use of the source of truth from FFI.
- let balance = Balance(
-     confirmed: confirmed,
-     pending: unconfirmed,
-     instantLocked: 0, // InstantLocked amount not provided in callback
-     total: confirmed + unconfirmed
- )
+ let balance = Balance(
+     confirmed: confirmed,
+     pending: unconfirmed,
+     instantLocked: 0,
+     total: confirmed + unconfirmed // or replace with FFI total if available in callback
+ )
If the callback can include a total field in the future, switch to that to avoid divergence.
Also applies to: 933-940
dash-spv/QRINFO_INTEGRATION_COMPLETE.md (2)
50-74: Specify a language on fenced code blocks (linter MD040)
Add a language identifier for the ASCII diagram to satisfy markdownlint and improve rendering.
-```
+```text
┌─────────────────────────────────────────────────────────┐
│ Network Layer │
...
└─────────────────────────────────────────────────────────┘

80-94: Add explicit references or links for “Next Steps” items
These are actionable; linking to tracking issues or TODOs (e.g., extract_chain_lock_from_coinbase()) helps follow-through and ownership.
I can open issues and scaffold unit tests/benchmarks for QRInfo processing if you want.
dash-spv/src/sync/sequential/transitions.rs (1)
107-118: Consider additional validation for phase transitions.
While the logic for skipping to FullySynced when no peers support filters is correct, consider adding a comment explaining why we don't require cfheaders to be complete in this specific case, as it might not be immediately obvious to future maintainers.
  SyncPhase::FullySynced { .. } => {
      // Allow skipping to FullySynced if no peers support filters
-     // Don't require cfheaders to be complete in this case
+     // Don't require cfheaders to be complete in this case because we're
+     // bypassing the entire filter download phase when no peers can provide them
      Ok(true)
  }
dash-spv/src/sync/validation_test.rs (3)
27-59: Mock data generators could be more robust.
While the mock creation functions work for basic testing, consider making them more flexible:
- fn create_mock_mn_list_diff(_height: u32) -> MnListDiff {
+ fn create_mock_mn_list_diff(height: u32) -> MnListDiff {
      MnListDiff {
          version: 1,
-         base_block_hash: BlockHash::all_zeros(),
-         block_hash: BlockHash::from([0; 32]),
+         base_block_hash: if height > 0 {
+             BlockHash::from([(height - 1) as u8; 32])
+         } else {
+             BlockHash::all_zeros()
+         },
+         block_hash: BlockHash::from([height as u8; 32]),
          total_transactions: 1,
This would make the mock data more realistic and help catch edge cases.
163-179: Add assertions for edge cases in consistency validation.
The consistency test is good but could be more thorough:
  #[tokio::test]
  async fn test_validation_state_consistency() {
      let mut manager = ValidationStateManager::new();
      // Set valid state
      manager.update_sync_height(100);
      manager.current_state_mut().last_validated_height = 50;
      // Should pass consistency check
      assert!(manager.validate_consistency().is_ok());
      // Set invalid state
      manager.current_state_mut().last_validated_height = 200;
      // Should fail consistency check
      assert!(manager.validate_consistency().is_err());
+
+     // Test edge case: equal heights should be valid
+     manager.current_state_mut().last_validated_height = 100;
+     assert!(manager.validate_consistency().is_ok());
+
+     // Test pending validation too far ahead
+     manager.add_pending_validation(2000, ValidationType::ChainLock);
+     assert!(manager.validate_consistency().is_err());
  }
217-243: Performance test structure is good, but consider actual benchmarking.
The performance test framework is well-structured but currently doesn't perform actual validation. Consider adding timing assertions:
  let duration = start.elapsed();
  println!("Created test data in {:?}", duration);
+
+ // Assert reasonable performance bounds
+ assert!(duration < Duration::from_secs(1),
+     "Test data creation took too long: {:?}", duration);
117-159: Consider adding overflow test for buffer size.
While the buffer size validation tests are good, consider adding a test for integer overflow scenarios:
+ // Test 4: Integer overflow in buffer size
+ let overflow_size = usize::MAX;
+ let result = ffi_dash_spv_get_quorum_public_key(
+     mock_client,
+     0,
+     quorum_hash.as_ptr(),
+     0,
+     ptr::null_mut(),
+     overflow_size,
+ );
+ assert!(result.error_code != 0, "Should fail with overflow size");
63-67: Document rationale for double-free test placeholder.
The comment mentions double-free prevention but doesn't include actual test code. Consider either implementing the test or explaining why it's deferred:
  // Test 2: Double-free prevention
  // In a real implementation with a valid handle:
  // let handle = create_valid_handle();
  // ffi_dash_spv_release_core_handle(handle);
  // ffi_dash_spv_release_core_handle(handle); // Should be safe
+ // TODO: Implement when mock handle creation is available
+ // This will test that the FFI layer properly tracks freed handles
1974-1990: Consider extracting the nearby heights logic to a helper method.
The logic for finding nearby available heights is useful and could be reused:
+ /// Helper to get available masternode list heights near a target height
+ fn get_nearby_available_heights(&self, target_height: u32, range: u32) -> Vec<u32> {
+     if let Some(engine) = self.masternode_list_engine() {
+         engine.masternode_lists
+             .keys()
+             .filter(|&&h| {
+                 h > target_height.saturating_sub(range) &&
+                 h < target_height.saturating_add(range)
+             })
+             .copied()
+             .collect()
+     } else {
+         Vec::new()
+     }
+ }
  // In get_quorum_at_height:
- let available_heights: Vec<u32> = engine
-     .masternode_lists
-     .keys()
-     .filter(|&&h| {
-         h > height.saturating_sub(100) && h < height.saturating_add(100)
-     })
-     .copied()
-     .collect();
+ let available_heights = self.get_nearby_available_heights(height, 100);
This would make the code more maintainable and the logic reusable.
qr_info_spv_plan/SUMMARY.md (2)
32-37: Add language specifiers to fenced code blocks.
For better syntax highlighting and clarity:
-```
+```text
dash-spv decides height → GetMnListDiff(A,B) → wait → GetMnListDiff(B,C) → repeat
-```
+```text
engine identifies needs → QRInfo request → batch processing → validation ready

113-129: Risk assessment is thorough and practical.
The identified risks and mitigations are well thought out. The emphasis on test vectors from real network data for QRInfo protocol implementation is particularly important given the complexity of the message format.
Consider adding one more risk item:
- Rollback scenarios: Complex state recovery during sync failures
  - Mitigation: Implement checkpointing and state snapshots for recovery

dash-spv/src/sync/chainlock_validation.rs (1)
8-16: Remove unused imports
The BLSPublicKey and BLSSignature imports from bls_sig_utils are not used in the code.
-use dashcore::{
-    bls_sig_utils::{BLSPublicKey, BLSSignature},
-    sml::{llmq_type::LLMQType, masternode_list_engine::MasternodeListEngine},
-    BlockHash, ChainLock,
-};
+use dashcore::{
+    sml::{llmq_type::LLMQType, masternode_list_engine::MasternodeListEngine},
+    BlockHash, ChainLock,
+};
dash-spv/src/sync/validation.rs (1)
307-308: Missing merkle root validation
The TODO comment indicates merkle root validation is not implemented. This is a critical security check for masternode list integrity.
Would you like me to help implement the merkle root validation logic for masternode list diffs?
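For reference, the core of such a check is the standard pairwise merkle fold. The sketch below demonstrates only the tree construction; the hash function is a deliberate stand-in (std's DefaultHasher producing 8-byte digests), since the real validation must use dashcore's double-SHA256 hashing over the diff's transaction hashes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

/// Stand-in for double-SHA256; real code must use dashcore's hash types.
fn combine(left: &[u8; 8], right: &[u8; 8]) -> [u8; 8] {
    let mut h = DefaultHasher::new();
    h.write(left);
    h.write(right);
    h.finish().to_le_bytes()
}

/// Pairwise-fold merkle root: on odd-sized levels the last node is paired
/// with itself, matching Bitcoin/Dash merkle tree construction.
fn merkle_root(mut level: Vec<[u8; 8]>) -> Option<[u8; 8]> {
    if level.is_empty() {
        return None;
    }
    while level.len() > 1 {
        if level.len() % 2 == 1 {
            let last = *level.last().unwrap();
            level.push(last); // duplicate the last node on odd-sized levels
        }
        level = level.chunks(2).map(|p| combine(&p[0], &p[1])).collect();
    }
    Some(level[0])
}
```

Validation would compute this root over the diff's contents and compare it against the merkle root committed in the corresponding block header.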
dash-spv/src/sync/masternodes_old.rs (2)
189-195: Validation engine integration incomplete
The validation engine integration is commented out with a TODO. Since the validation engine is initialized in the constructor when validation is enabled, this integration should be completed.
Would you like me to help implement the validation engine integration for masternode diff processing?
379-390: Placeholder method delegates to simple implementation
The execute_engine_driven_sync method is described as a placeholder but simply delegates to start_sync. Consider either implementing the engine-driven logic or removing this method until it's needed.
358-366: Hardcoded stale timeout value
The stale timeout is hardcoded to 5 minutes. Consider making this configurable to allow different timeout values for different network conditions.
+/// Default timeout for pending validations
+const DEFAULT_STALE_TIMEOUT_SECS: u64 = 300; // 5 minutes
+
  pub struct ValidationStateManager {
      /// Current state
      current_state: ValidationState,
      /// State snapshots for rollback
      snapshots: VecDeque<StateSnapshot>,
      /// Maximum age for snapshots
      snapshot_ttl: Duration,
+     /// Timeout for pending validations
+     stale_timeout: Duration,
      /// State change listeners
      change_listeners: Vec<Box<dyn Fn(&ValidationState) + Send>>,
  }

  impl ValidationStateManager {
      /// Create a new validation state manager
      pub fn new() -> Self {
          Self {
              current_state: ValidationState::default(),
              snapshots: VecDeque::new(),
              snapshot_ttl: Duration::from_secs(3600), // 1 hour
+             stale_timeout: Duration::from_secs(DEFAULT_STALE_TIMEOUT_SECS),
              change_listeners: Vec::new(),
          }
      }

  // In validate_consistency method:
- let stale_timeout = Duration::from_secs(300); // 5 minutes
  for (height, pending) in &self.current_state.pending_validations {
-     if now.duration_since(pending.queued_at) > stale_timeout {
+     if now.duration_since(pending.queued_at) > self.stale_timeout {
381-387: Trait object storage for listeners
The use of Box<dyn Fn(&ValidationState) + Send> for listeners is appropriate for type erasure, but note that this prevents removing specific listeners later. If listener management becomes necessary, consider using an ID-based system.
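An ID-based registry along these lines would allow individual removal. This is a self-contained sketch, not the crate's API: the state type is a stand-in (`u32` instead of `&ValidationState`), and names are illustrative.

```rust
use std::collections::HashMap;

/// Stand-in listener type; the real code would take &ValidationState.
type Listener = Box<dyn Fn(&u32) + Send>;

/// ID-based registry so individual listeners can be removed later.
struct ListenerRegistry {
    next_id: u64,
    listeners: HashMap<u64, Listener>,
}

impl ListenerRegistry {
    fn new() -> Self {
        Self { next_id: 0, listeners: HashMap::new() }
    }

    /// Register a listener and return a handle usable for removal.
    fn add(&mut self, f: Listener) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.listeners.insert(id, f);
        id
    }

    /// Remove a listener by its handle; returns true if it existed.
    fn remove(&mut self, id: u64) -> bool {
        self.listeners.remove(&id).is_some()
    }

    /// Invoke every registered listener with the current state.
    fn notify(&self, state: &u32) {
        for l in self.listeners.values() {
            l(state);
        }
    }
}
```

The Vec-based design only supports clearing all listeners; handing out an ID at registration time is the minimal change that enables targeted removal.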
26-43: Avoid hard-coded absolute paths in documentation snippets
The code snippets embed absolute local paths (e.g., /Users/quantum/src/...). This will confuse readers and break copy-paste usability across environments. Use repository-relative paths instead.
190-206: Clarify “genesis vs zero-hash” semantics in docs
Several parts assume the all-zeros 32-byte hash is “genesis.” In Dash, the all-zeros hash is used as a sentinel (e.g., for a genesis base in diffs), but it is not the actual genesis hash. Please adjust wording to “zero-hash sentinel” to avoid ambiguity when readers map behavior to core.
dash-spv/src/sync/headers_with_reorg.rs (3)
253-279: Dropping non-connecting batches blindly may hinder recovery
If a batch doesn’t connect to tip, the current approach drops it and relies on timeouts. Consider immediate fallback: request headers starting from the batch’s first.prev_blockhash to realign faster, or trigger a targeted recovery request to avoid stalls with sluggish peers.
Would you like a small patch to attempt an immediate recovery request when this occurs?
687-709: Remove unreachable code after early return
handle_headers2_message returns Err immediately, but retains a large block of unreachable code below. This can confuse readers and may trigger warnings.
Apply this diff to keep only the early return and a TODO:
  tracing::warn!(
      "⚠️ Headers2 support is currently NON-FUNCTIONAL. Received {} compressed headers from peer {} but cannot process them.",
      headers2.headers.len(),
      peer_id
  );
  // Mark headers2 as failed for this session to avoid retrying
  self.headers2_failed = true;
  // Return an error to trigger fallback to regular headers
  return Err(SyncError::Headers2DecompressionFailed(
      "Headers2 is currently disabled due to protocol compatibility issues".to_string(),
  ));
- // ... unreachable implementation follows
When re-enabling, reintroduce the decompression path guarded behind a feature flag or a runtime switch.
1114-1186: Checkpoint prepopulation still a stub
prepopulate_from_checkpoints logs and counts but does not write headers. If this is intentionally deferred, consider adding a FIXME with reference to a tracking issue, or implement the batch write when the storage API is ready.
If you want, I can draft a storage-facing batch write that leverages store_headers in reasonable groups and validates hashes against checkpoints before insertion.
75-85: Docstring and behavior: zero-hash sentinel vs genesis
The docs state “Returns Some(0) for the genesis block (all zeros hash).” The all-zeros 32-byte value is a sentinel in diffs, not the actual genesis hash. Consider rewording to “zero-hash sentinel” to avoid semantic confusion.
Apply this diff:
- /// The block height if found, or `None` if not in the container.
- /// Returns `Some(0)` for the genesis block (all zeros hash).
+ /// The block height if found, or `None` if not in the container.
+ /// Returns `Some(0)` for the zero-hash sentinel used by genesis-base diffs.
84-89: Stray comment and minor readability nit
The inline comment // rep is unclear. Remove or clarify.
Apply this diff:
- if block_hash.as_byte_array() == &[0; 32] {
-     // rep
+ if block_hash.as_byte_array() == &[0; 32] {
      Some(0)
463-471: Missing imports and cache type for ChainLockValidator.
Add missing imports used by the snippet (BTreeMap, BlockHash, QRInfo).
  use dashcore::chain::chainlock::ChainLock;
  use dashcore::sml::masternode_list_engine::message_request_verification::ChainLockVerificationExt;
+use dashcore::{BlockHash, network::message_qrinfo::QRInfo};
+use std::collections::BTreeMap;
596-606: Not an LRU cache; BTreeMap + keys().next() evicts smallest key, not least-recently-used.
Either rename to "bounded cache" or implement true LRU (e.g., IndexMap with move_to_front on access).
I can provide a minimal LRU wrapper based on linked hash ordering if you want the snippet to reflect true LRU semantics.
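To illustrate the distinction, here is a minimal true-LRU cache built only on std containers. It is a teaching sketch, not a performance-grade implementation (removal from the recency queue is O(n)); its point is that eviction follows access recency, not key order as in the reviewed BTreeMap snippet.

```rust
use std::collections::{HashMap, VecDeque};
use std::hash::Hash;

/// Minimal true-LRU cache: a recency queue plus a map.
struct LruCache<K: Clone + Eq + Hash, V> {
    capacity: usize,
    map: HashMap<K, V>,
    order: VecDeque<K>, // front = least recently used
}

impl<K: Clone + Eq + Hash, V> LruCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    /// Move a key to the most-recently-used position.
    fn touch(&mut self, key: &K) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_back(k);
        }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        if self.map.contains_key(key) {
            self.touch(key);
        }
        self.map.get(key)
    }

    fn insert(&mut self, key: K, value: V) {
        if self.map.insert(key.clone(), value).is_some() {
            self.touch(&key);
        } else {
            self.order.push_back(key);
            if self.map.len() > self.capacity {
                if let Some(lru) = self.order.pop_front() {
                    self.map.remove(&lru); // evict least recently used
                }
            }
        }
    }
}
```

With a BTreeMap-based "cache", a frequently accessed entry with the smallest key is the first to go; here, access order decides.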
618-620: Missing import for Instant in validation_state.rs snippet.
Instant is used in ValidationCheckpoint.
  use dashcore::sml::masternode_list_engine::MasternodeListEngine;
  use std::collections::BTreeMap;
+use std::time::Instant;
792-799: Incorrect import for tokio test attribute; missing Instant for tests.
use tokio::test; is unnecessary/incorrect; add Instant for timing.
-use tokio::test;
+use std::time::Instant;
Note: The #[tokio::test] attributes are sufficient; no import needed.
qr_info_spv_plan/PHASE_3.md (4)
49-63: Missing imports for concurrency and timing primitives.
These are used throughout the snippet.
  use tokio::{sync::Semaphore, task::JoinSet, time::timeout};
-use std::sync::Arc;
+use std::sync::Arc;
+use tokio::sync::{mpsc, Mutex};
+use std::sync::atomic::{AtomicUsize, Ordering};
+use std::time::{Duration, Instant};
438-442: Logging with {:x} for BlockHash may fail.
Prefer {:?} unless LowerHex is implemented.
- "Registered QRInfo request {} for range {:x} to {:x}",
+ "Registered QRInfo request {} for range {:?} to {:?}",
580-583: Adjust tests to new oneshot channel Result type.
After changing the channel to carry Result<QRInfo, CorrelationError>, validate Ok on success and Err on timeout.
- let received_qr_info = response_rx.await.unwrap();
+ let received_qr_info = response_rx.await.unwrap().unwrap();
@@
- let result = response_rx.await;
- assert!(result.is_err()); // Channel was closed due to timeout
+ let result = response_rx.await.unwrap();
+ assert!(matches!(result, Err(CorrelationError::Timeout)));
Also applies to: 645-647
883-895: RateLimiter race window and backpressure coupling
can_make_request() and record_request() aren’t atomic together; concurrent calls can oversubscribe tokens. Consider using a single async method acquire_token(). Also, record_request awaiting refill blocks the scheduling loop; prefer a decoupled refill task.
I can draft a token-bucket with a tokio::sync::Semaphore to model tokens and a background refill task, which simplifies the concurrency.
Also applies to: 705-733
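The core fix for the race is to make check-and-consume one operation. The synchronous sketch below shows a token bucket whose `try_acquire` refills and consumes atomically within a single call; names are illustrative, and an async variant would wrap this state in a Mutex or model tokens with a tokio Semaphore plus a background refill task, as suggested above.

```rust
use std::time::Instant;

/// Token-bucket sketch: check-and-consume happens in one call,
/// closing the can_make_request/record_request race window.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self {
            capacity,
            tokens: capacity, // start full
            refill_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Add tokens for the elapsed time, capped at capacity.
    fn refill(&mut self, now: Instant) {
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
    }

    /// Atomically check for and consume a token; false when none available.
    fn try_acquire(&mut self) -> bool {
        self.refill(Instant::now());
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

Because callers only ever see the combined operation, two concurrent tasks cannot both observe "one token left" and then both consume it.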
let pubkey_ptr = public_key as *const _ as *const u8;
std::ptr::copy_nonoverlapping(pubkey_ptr, out_pubkey, QUORUM_PUBKEY_SIZE);
Validate public key size before pointer arithmetic
The code assumes public_key is at least 48 bytes without validation. This could lead to undefined behavior if the data structure changes.
Add a size check before the pointer cast:
```diff
 // Copy the public key directly from the global index
+// Ensure the public key is the expected size
+if std::mem::size_of_val(public_key) < QUORUM_PUBKEY_SIZE {
+    return FFIResult::error(
+        FFIErrorCode::RuntimeError,
+        "Internal error: Public key size mismatch",
+    );
+}
 let pubkey_ptr = public_key as *const _ as *const u8;
 std::ptr::copy_nonoverlapping(pubkey_ptr, out_pubkey, QUORUM_PUBKEY_SIZE);
```
🤖 Prompt for AI Agents
In dash-spv-ffi/src/platform_integration.rs around lines 175 to 176, the code
casts `public_key` to a pointer and copies QUORUM_PUBKEY_SIZE bytes without
validating the source length, risking undefined behavior; before doing pointer
arithmetic, check that `public_key.len() >= QUORUM_PUBKEY_SIZE` (or equivalent
size field) and return an error or early return if too small, then perform the
copy using a safe method (e.g., take a slice and use its as_ptr() or
copy_from_slice) so the copy only proceeds when the source buffer is large
enough.
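The bounds-checked copy the prompt describes can be sketched with a plain slice API (the function name and error type below are illustrative, not the crate's actual FFI signatures):

```rust
const QUORUM_PUBKEY_SIZE: usize = 48;

/// Copy a quorum public key into a fixed-size output buffer,
/// returning an error instead of reading out of bounds when the
/// source buffer is shorter than expected.
fn copy_pubkey(
    public_key: &[u8],
    out: &mut [u8; QUORUM_PUBKEY_SIZE],
) -> Result<(), String> {
    if public_key.len() < QUORUM_PUBKEY_SIZE {
        return Err(format!(
            "public key too short: {} < {}",
            public_key.len(),
            QUORUM_PUBKEY_SIZE
        ));
    }
    // copy_from_slice panics on length mismatch, so slice the
    // source to exactly QUORUM_PUBKEY_SIZE bytes first.
    out.copy_from_slice(&public_key[..QUORUM_PUBKEY_SIZE]);
    Ok(())
}
```

Taking `&[u8]` instead of a raw pointer keeps the length check and the copy in safe code; only the final write through `out_pubkey` would still need an `unsafe` block at the FFI boundary.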
```rust
let result = handler.handle_message(NetworkMessage::QRInfo(qr_info.clone())).await;

match result {
    MessageHandleResult::QRInfo(received) => {
        assert_eq!(
            received.last_commitment_per_index.len(),
            qr_info.last_commitment_per_index.len()
        );
    }
    _ => panic!("Expected QRInfo result"),
}

// Check stats were updated
assert_eq!(handler.stats().qrinfo_messages, 1);
assert_eq!(handler.stats().messages_received, 1);
}
```
🛠️ Refactor suggestion
Make the QRInfo message test assert non-trivial payload
As written, last_commitment_per_index is empty, so the equality check is vacuous. Either populate the field in create_test_qr_info() or push an element in the test before sending to ensure the handler passes the payload through.
Example adjustment in the test:
```diff
-let qr_info = create_test_qr_info();
+let mut qr_info = create_test_qr_info();
+qr_info.last_commitment_per_index.push(
+    QuorumEntry {
+        version: 1,
+        llmq_type: LLMQType::Llmqtype50_60,
+        quorum_hash: dashcore::QuorumHash::all_zeros(),
+        quorum_index: None,
+        signers: vec![true; 1],
+        valid_members: vec![true; 1],
+        quorum_public_key: dashcore::bls_sig_utils::BLSPublicKey::from([0; 48]),
+        quorum_vvec_hash: dashcore::hash_types::QuorumVVecHash::all_zeros(),
+        threshold_sig: dashcore::bls_sig_utils::BLSSignature::from([0; 96]),
+        all_commitment_aggregated_signature: dashcore::bls_sig_utils::BLSSignature::from([0; 96]),
+    }
+);
```

This validates actual data flow, not just the presence of a branch.
🤖 Prompt for AI Agents
In dash-spv/src/network/tests.rs around lines 52 to 67, the QRInfo test
currently asserts equality on last_commitment_per_index when that field is
empty, making the check vacuous; update the test to provide a non-trivial
payload by either modifying create_test_qr_info() to populate
last_commitment_per_index with one or more entries or by pushing an element into
qr_info.last_commitment_per_index in the test before sending the message, then
assert the handler returned the same entries (e.g., check length > 0 and compare
the inserted element(s)) and keep the existing stats assertions.
```rust
// TODO: This method currently expects storage-relative heights (0-based from sync_base_height).
// Consider refactoring to accept blockchain heights and handle conversion internally for better UX.
```
🛠️ Refactor suggestion
Consider documenting height conversion expectations
The TODO comment correctly identifies that this method expects storage-relative heights. This should be properly documented in the trait definition to avoid confusion.
Consider adding documentation to clarify the height parameter expectations:
```diff
+    /// Get a header at the specified height.
+    ///
+    /// # Arguments
+    /// * `height` - Storage-relative height (0-based from sync_base_height).
+    ///   For blockchain heights, subtract sync_base_height before calling.
+    ///
+    /// # Returns
+    /// The header at the specified height, or None if not found.
     async fn get_header(&self, height: u32) -> StorageResult<Option<BlockHeader>> {
```

🤖 Prompt for AI Agents
In dash-spv/src/storage/disk.rs around lines 1288 to 1290, the TODO notes that
the method expects storage-relative heights (0-based from sync_base_height) but
this expectation is not documented; update the trait definition's doc comment
for this method to explicitly state that the height parameter is
storage-relative (0-based offset from sync_base_height), include the exact
conversion rule (storage_index = blockchain_height - sync_base_height) and an
example, and if applicable mention that callers must convert blockchain heights
before calling or consider the alternative of changing the API to accept
blockchain heights and perform conversion internally.
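The conversion rule from the prompt (`storage_index = blockchain_height - sync_base_height`) can be made explicit with a small helper pair; these names are hypothetical, for illustration only:

```rust
/// Convert an absolute blockchain height to a storage-relative index
/// (0-based from sync_base_height). Heights below the sync base are not
/// stored, so the subtraction is checked rather than allowed to wrap.
fn to_storage_index(blockchain_height: u32, sync_base_height: u32) -> Option<u32> {
    blockchain_height.checked_sub(sync_base_height)
}

/// Inverse mapping back to an absolute blockchain height,
/// saturating instead of overflowing on pathological inputs.
fn to_blockchain_height(storage_index: u32, sync_base_height: u32) -> u32 {
    sync_base_height.saturating_add(storage_index)
}
```

Callers holding a blockchain height would convert with `to_storage_index` before calling `get_header`; documenting this pair in the trait docs makes the off-by-`sync_base_height` contract hard to miss.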
```rust
/// Find the appropriate quorum for chain lock validation
fn find_chain_lock_quorum(
    &self,
    height: u32,
    engine: &MasternodeListEngine,
) -> SyncResult<Option<(BlockHash, BLSPublicKey)>> {
    // ChainLocks use the LLMQ at the tip of the chain
    // We need to find the most recent LLMQ of the required type

    // Get the masternode list at or before the height
    let mn_list_height = engine.masternode_lists.range(..=height).rev().next().map(|(h, _)| *h);

    if let Some(list_height) = mn_list_height {
        if let Some(mn_list) = engine.masternode_lists.get(&list_height) {
            // Find the chain lock quorum
            if let Some(quorums) = mn_list.quorums.get(&self.config.required_llmq_type) {
                // Get the most recent quorum
                if let Some((quorum_hash, entry)) = quorums.iter().next() {
                    // Get public key from the quorum entry
                    return Ok(Some((*quorum_hash, entry.quorum_entry.quorum_public_key)));
                }
            }
        }
    }

    Ok(None)
}
```
find_chain_lock_quorum method is never used
The find_chain_lock_quorum method is defined but never called within the module. It appears the chain lock validation is now delegated to the engine's verify_chain_lock method instead. Consider removing this unused code.
```diff
-    /// Find the appropriate quorum for chain lock validation
-    fn find_chain_lock_quorum(
-        &self,
-        height: u32,
-        engine: &MasternodeListEngine,
-    ) -> SyncResult<Option<(BlockHash, BLSPublicKey)>> {
-        // ChainLocks use the LLMQ at the tip of the chain
-        // We need to find the most recent LLMQ of the required type
-
-        // Get the masternode list at or before the height
-        let mn_list_height = engine.masternode_lists.range(..=height).rev().next().map(|(h, _)| *h);
-
-        if let Some(list_height) = mn_list_height {
-            if let Some(mn_list) = engine.masternode_lists.get(&list_height) {
-                // Find the chain lock quorum
-                if let Some(quorums) = mn_list.quorums.get(&self.config.required_llmq_type) {
-                    // Get the most recent quorum
-                    if let Some((quorum_hash, entry)) = quorums.iter().next() {
-                        // Get public key from the quorum entry
-                        return Ok(Some((*quorum_hash, entry.quorum_entry.quorum_public_key)));
-                    }
-                }
-            }
-        }
-
-        Ok(None)
-    }
```
🤖 Prompt for AI Agents
In dash-spv/src/sync/chainlock_validation.rs around lines 213 to 240, the
find_chain_lock_quorum method is defined but never used; remove the entire
method and any related private references or imports so the file no longer
contains dead code, run cargo build/tests to ensure no compile errors, and
update module-level comments if they reference this removed helper.
dash-spv/src/sync/masternodes_old.rs
Outdated
```rust
last_height: target_height,
engine_state: vec![], // TODO: Serialize engine state properly
last_update: std::time::SystemTime::now()
    .duration_since(std::time::UNIX_EPOCH)
    .unwrap()
    .as_secs(),
terminal_block_hash: None,
};
```
Engine state serialization not implemented
The code creates a MasternodeState with an empty engine_state vector and has TODOs for proper serialization and storage. This means the masternode state is not actually persisted.
This needs to be implemented for the sync to work correctly across restarts:
```diff
 // Store the updated masternode state
-// Note: The actual engine state serialization would be done here
+let engine_state = engine.serialize()
+    .map_err(|e| SyncError::Validation(format!("Failed to serialize engine state: {}", e)))?;
+
 let state = MasternodeState {
     last_height: target_height,
-    engine_state: vec![], // TODO: Serialize engine state properly
+    engine_state,
     last_update: std::time::SystemTime::now()
         .duration_since(std::time::UNIX_EPOCH)
         .unwrap()
         .as_secs(),
     terminal_block_hash: None,
 };
-// TODO: Implement proper state saving
-// storage.save_masternode_state(&state).await
-//     .map_err(|e| SyncError::Storage(format!("Failed to save masternode state: {}", e)))?;
+storage.save_masternode_state(&state).await
+    .map_err(|e| SyncError::Storage(format!("Failed to save masternode state: {}", e)))?;
```

```rust
// Validate each rotation cycle in last_commitment_per_index
for (index, quorum_entry) in qr_info.last_commitment_per_index.iter().enumerate() {
    if !quorum_entry.llmq_type.is_rotating_quorum_type() {
        continue;
    }

    match engine.validate_rotation_cycle_quorums(&[quorum_entry]) {
        Ok(statuses) => {
            if let Some(status) = statuses.get(&quorum_entry.quorum_hash) {
                match status {
                    LLMQEntryVerificationStatus::Verified => verified_cycles += 1,
                    _ => {
                        failed_cycles += 1;
                        errors.push(ValidationError::RotationCycleInconsistent {
                            cycle_hash: quorum_entry.quorum_hash,
                            reason: format!("Rotation validation failed with status: {:?}", status),
                        });
                    }
                }
```
Type mismatch: RotationCycleInconsistent uses BlockHash but code passes quorum_hash.
quorum_entry.quorum_hash is a QuorumHash, not a BlockHash. Align the error variant and sites of construction.
Apply:

```diff
-    RotationCycleInconsistent { cycle_hash: BlockHash, reason: String },
+    RotationCycleInconsistent { quorum_hash: QuorumHash, reason: String },
```

```diff
-    errors.push(ValidationError::RotationCycleInconsistent {
-        cycle_hash: quorum_entry.quorum_hash,
+    errors.push(ValidationError::RotationCycleInconsistent {
+        quorum_hash: quorum_entry.quorum_hash,
         reason: format!("Rotation validation failed with status: {:?}", status),
     });
```

Similarly update the `Err(e)` branch constructing this variant.
Also applies to: 119-125
🤖 Prompt for AI Agents
In qr_info_spv_plan/PHASE_4.md around lines 283-301 (and also update the similar
construction at lines 119-125), the code constructs
ValidationError::RotationCycleInconsistent using quorum_entry.quorum_hash (a
QuorumHash) but the enum variant currently expects a BlockHash; change the error
type and all sites that construct it so they agree: either modify
RotationCycleInconsistent to take a QuorumHash (preferred) or add a new variant
that accepts a QuorumHash, and update both the Ok(statuses) branch and the
Err(e) branch to pass the correct QuorumHash value (or convert to the expected
BlockHash only if that conversion is deliberate and implemented), ensuring all
usages and pattern matches are adjusted accordingly.
```rust
// Verify against both potential quorums
let verification_results = vec![
    quorum_under_result.and_then(|quorum| {
        quorum.verify_chain_lock_signature(&chain_lock.signature, &chain_lock.block_hash)
    }),
    quorum_over_result.and_then(|quorum| {
        quorum.verify_chain_lock_signature(&chain_lock.signature, &chain_lock.block_hash)
    }),
];

// Chain lock is valid if any quorum can verify it
Ok(verification_results.into_iter().any(|result| result.unwrap_or(false)))
}
```
Option/Result conflation in chain lock verification.
`Option::and_then` expects a closure returning an `Option`, but `verify_chain_lock_signature` likely returns `Result<bool, _>`. The final `unwrap_or(false)` also hints at the confusion. Implement explicit branching.
Apply:
```diff
-    // Verify against both potential quorums
-    let verification_results = vec![
-        quorum_under_result.and_then(|quorum| {
-            quorum.verify_chain_lock_signature(&chain_lock.signature, &chain_lock.block_hash)
-        }),
-        quorum_over_result.and_then(|quorum| {
-            quorum.verify_chain_lock_signature(&chain_lock.signature, &chain_lock.block_hash)
-        }),
-    ];
-
-    // Chain lock is valid if any quorum can verify it
-    Ok(verification_results.into_iter().any(|result| result.unwrap_or(false)))
+    // Verify against both potential quorums; valid if any can verify
+    let mut verified = false;
+    if let Some(quorum) = quorum_under_result {
+        if quorum.verify_chain_lock_signature(&chain_lock.signature, &chain_lock.block_hash).unwrap_or(false) {
+            verified = true;
+        }
+    }
+    if !verified {
+        if let Some(quorum) = quorum_over_result {
+            if quorum.verify_chain_lock_signature(&chain_lock.signature, &chain_lock.block_hash).unwrap_or(false) {
+                verified = true;
+            }
+        }
+    }
+    Ok(verified)
```
🤖 Prompt for AI Agents
In qr_info_spv_plan/PHASE_4.md around lines 552-564, the code incorrectly uses
Option::and_then with a method that returns Result<bool, _>, causing
Option/Result conflation and the subsequent unwrap_or(false) hack; replace the
and_then closures with explicit branching: for each quorum Option, if
Some(quorum) call quorum.verify_chain_lock_signature(...) and collect the
Result<bool, E>, if None treat it as Ok(false); then evaluate the collected
Results by returning Ok(true) if any Result is Ok(true), otherwise return the
first Err encountered (or Ok(false) if all are Ok(false)); ensure you propagate
errors properly instead of unwrapping or hiding them.
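The error-propagating shape the prompt describes (treat `None` as not verifying, return `Ok(true)` if any quorum verifies, otherwise surface the first `Err`) can be sketched against a stand-in quorum type — `Quorum` and its `verify` method below are mocks, not the real engine API:

```rust
/// Stand-in for a resolved quorum whose signature check can fail.
struct Quorum {
    outcome: Result<bool, String>,
}

impl Quorum {
    fn verify(&self) -> Result<bool, String> {
        self.outcome.clone()
    }
}

/// Valid if any present quorum verifies; a missing quorum counts as
/// "did not verify"; errors are propagated rather than swallowed by
/// unwrap_or(false).
fn verify_chain_lock(
    under: Option<&Quorum>,
    over: Option<&Quorum>,
) -> Result<bool, String> {
    let mut first_err = None;
    for quorum in [under, over].into_iter().flatten() {
        match quorum.verify() {
            Ok(true) => return Ok(true),
            Ok(false) => {}
            Err(e) => {
                // Remember only the first failure for reporting.
                if first_err.is_none() {
                    first_err = Some(e);
                }
            }
        }
    }
    match first_err {
        Some(e) => Err(e),
        None => Ok(false),
    }
}
```

Note the precedence choice: a successful verification short-circuits even if the other quorum errored, which matches the prompt's "Ok(true) if any Result is Ok(true)" rule.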
```swift
// DashSPVFFI target removed - now provided by unified SDK in dashpay-ios
// Note: This package cannot build standalone - it requires the unified SDK's DashSPVFFI module
```
💡 Verification agent
🧩 Analysis chain
Breaking change: DashSPVFFI target removed.
The removal of the DashSPVFFI target and the comment indicating it's "now provided by unified SDK in dashpay-ios" represents a significant architectural change. This means:
- This package can no longer build standalone
- Users must now depend on the unified SDK for SPV functionality
Ensure this breaking change is properly documented in release notes and migration guides.
🏁 Script executed:
```shell
#!/bin/bash
# Check if there are any remaining references to DashSPVFFI in Swift code
fd -e swift . -x grep -l "DashSPVFFI" {}
```

Length of output: 1148
I want to confirm how and where DashSPVFFI is still referenced in the source to determine impact. Let's pull in the import statements and the beginning of the bridge file:
🏁 Script executed:
```shell
#!/bin/bash
# Find import statements referring to DashSPVFFI
grep -R "import .*DashSPVFFI" -n swift-dash-core-sdk
# Show first 50 lines of FFIBridge.swift for context
sed -n '1,50p' swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Core/FFIBridge.swift
```

Length of output: 3121
Document DashSPVFFI removal as a breaking change
The removal of the DashSPVFFI target is indeed a breaking change—every Swift source in this package still imports DashSPVFFI, so consumers must now pull in the unified SDK to satisfy these imports. Please update documentation and migration materials accordingly.
• SwiftDashCoreSDK/Core/FFIBridge.swift
• SwiftDashCoreSDK/Core/FFITypes.swift
• SwiftDashCoreSDK/Core/SPVClient+Verification.swift
• SwiftDashCoreSDK/Core/SPVClient.swift
• SwiftDashCoreSDK/Core/SPVClientConfiguration.swift
• SwiftDashCoreSDK/Models/Balance.swift
• SwiftDashCoreSDK/Models/Network.swift
• SwiftDashCoreSDK/Models/SPVStats.swift
• SwiftDashCoreSDK/Models/SyncProgress.swift
• SwiftDashCoreSDK/Models/Transaction.swift
• SwiftDashCoreSDK/Models/UTXO.swift
• SwiftDashCoreSDK/Models/ValidationMode.swift
• SwiftDashCoreSDK/Types/MempoolTypes.swift
• SwiftDashCoreSDK/Wallet/WalletManager.swift
• Tests/SwiftDashCoreSDKTests/MempoolTests.swift
Action items:
- Add a clear note in the release notes about removing the FFI target and now depending on the unified SDK.
- Update the Package.swift README and any migration guide to show how to add the unified SDK (e.g. `dashpay-ios`) so that `import DashSPVFFI` continues to resolve.
- Optionally, provide a sample snippet demonstrating how to include the unified SDK in a client project's dependencies.
🤖 Prompt for AI Agents
In swift-dash-core-sdk/Package.swift around lines 26 to 27, document that
removing the DashSPVFFI target is a breaking change and instruct consumers to
add the unified SDK (e.g. dashpay-ios) as a dependency so imports of DashSPVFFI
resolve; update the release notes to explicitly state the removal and migration
steps, modify the Package.swift README and any migration guide to show exact
instructions for adding the unified SDK to client projects (Swift Package
Manager, CocoaPods, or SPM entry), and add a short sample snippet in the README
demonstrating how to include the unified SDK dependency in a client project so
users can continue to use import DashSPVFFI.
```diff
 .target(
     name: "SwiftDashCoreSDK",
-    dependencies: ["DashSPVFFI"],
+    dependencies: [],
```
💡 Verification agent
🧩 Analysis chain
Dependency removal needs verification.
The SwiftDashCoreSDK target no longer depends on DashSPVFFI. Verify that all functionality previously provided by DashSPVFFI is available through the unified SDK and that existing code won't break.
🏁 Script executed:
```shell
#!/bin/bash
# Check for any imports or usage of DashSPVFFI in SwiftDashCoreSDK sources
rg -t swift "import DashSPVFFI|DashSPVFFI\." Sources/SwiftDashCoreSDK/
```

Length of output: 138
Let’s search the entire repo for any references to DashSPVFFI, not just within a specific path:
🏁 Script executed:
```shell
#!/bin/bash
# Recursively search for any imports or usage of DashSPVFFI across the repo
rg -t swift "import DashSPVFFI" .
rg "DashSPVFFI" .
```

Length of output: 5646
Restore or Update the DashSPVFFI Dependency in Package.swift
The SwiftDashCoreSDK target manifest no longer lists DashSPVFFI, yet many source files still import and rely on its types and APIs. Removing this dependency will break the build unless you:
- Replace every `import DashSPVFFI` and its usages with equivalents from the unified SDK, or
- Re-add `DashSPVFFI` (or the unified SDK package providing it) to the target's `dependencies` array in `swift-dash-core-sdk/Package.swift`.
Key affected files include (but aren’t limited to):
• Sources/SwiftDashCoreSDK/Core/SPVClient.swift
• Sources/SwiftDashCoreSDK/Core/FFIBridge.swift
• Sources/SwiftDashCoreSDK/Types/MempoolTypes.swift
• Sources/SwiftDashCoreSDK/Models/UTXO.swift
• Tests/SwiftDashCoreSDKTests/MempoolTests.swift
Please update the manifest or code accordingly so that all SPV functionality remains available.
🤖 Prompt for AI Agents
In swift-dash-core-sdk/Package.swift around line 61, the SwiftDashCoreSDK
target's dependencies array no longer includes DashSPVFFI which breaks many
imports; restore the missing dependency by adding the DashSPVFFI (or the unified
SDK package that provides it) entry to the target's dependencies array with the
correct package URL and version requirement, or alternatively update all source
files to remove and replace every import DashSPVFFI and its API usages with
equivalents from the unified SDK; ensure Package.swift references the exact
package name and version that matches the code, then run swift build/tests to
confirm all SPV-related files compile.
```swift
// Detailed sync callbacks
private let detailedSyncProgressCallback: @convention(c) (UnsafePointer<FFIDetailedSyncProgress>?, UnsafeMutableRawPointer?) -> Void = { ffiProgress, userData in
    print("🟢 detailedSyncProgressCallback called from FFI")
    guard let userData = userData,
          let ffiProgress = ffiProgress else {
        print("🟢 userData or ffiProgress is nil")
        return
    }

    print("🟢 Getting holder from userData")
    let holder = Unmanaged<DetailedCallbackHolder>.fromOpaque(userData).takeUnretainedValue()
    print("🟢 Calling holder.progressCallback")
    // Pass the FFI progress directly, conversion will happen in the holder's callback
    holder.progressCallback?(ffiProgress.pointee)
}
```
🛠️ Refactor suggestion
Don’t spam stdout in hot FFI callbacks
These prints will execute at high frequency during sync and will significantly degrade performance and flood logs. Gate behind DEBUG or remove.
```diff
-    print("🟢 detailedSyncProgressCallback called from FFI")
+    #if DEBUG
+    print("🟢 detailedSyncProgressCallback called from FFI")
+    #endif
@@
-          let ffiProgress = ffiProgress else {
-        print("🟢 userData or ffiProgress is nil")
-        return
-    }
+          let ffiProgress = ffiProgress else {
+        #if DEBUG
+        print("🟢 userData or ffiProgress is nil")
+        #endif
+        return
+    }
@@
-    print("🟢 Getting holder from userData")
+    #if DEBUG
+    print("🟢 Getting holder from userData")
+    #endif
@@
-    print("🟢 Calling holder.progressCallback")
+    #if DEBUG
+    print("🟢 Calling holder.progressCallback")
+    #endif
```
🤖 Prompt for AI Agents
In swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Core/SPVClient.swift around
lines 368 to 382, the detailedSyncProgressCallback currently logs multiple print
statements on every FFI callback which will flood stdout and hurt performance;
remove the prints or guard them behind a compile-time or runtime debug flag
(e.g. wrap in #if DEBUG ... #endif or check a lightweight boolean) and ensure
the callback performs only minimal, non-allocating work (call
holder.progressCallback with the ffiProgress.pointee) so the hot path remains
fast and side-effect free.
8180b4f to 1caa7b6 (Compare)
…hpay#98) * refactor(swift-dash-core-sdk): suppress cargo build output on success Only display cargo build logs when builds fail, showing clean checkmarks for successful builds. This makes the build output more concise and easier to read while still providing full diagnostics on failure. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com> * Remove vestigial terminal block infrastructure from dash-spv This commit removes the obsolete terminal block system that was previously used for sync optimization but has been superseded by sequential sync. Changes: - Remove all terminal block storage methods from StorageManager trait - Remove StoredTerminalBlock type and related data structures - Delete terminal block data files (mainnet/testnet JSON files) - Remove terminal block sync modules and managers - Clean up backup files (.backup, _old.rs) - Remove unused terminal block tests and examples - Remove terminal block documentation The dash-spv binary continues to work correctly, reaching sync height of 1306349+ on testnet as verified by testing. 
🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com> * chore(dash-spv): remove obsolete test examples Removed 5 obsolete example files that were using old sync patterns: - test_headers2.rs and test_headers2_fix.rs (superseded by sequential sync) - test_genesis.rs, test_header_count.rs, test_initial_sync.rs (test utilities) Baseline sync verified: Headers: 1306354 🤖 Generated with Claude Code: https://claude.ai/code Co-Authored-By: Claude <noreply@anthropic.com> * chore(dash-spv): remove dead code and clean up comments - Removed interleaved sync reference in sequential/mod.rs comment - Removed DKG window reference in sequential/mod.rs comment - Removed unused SAVE_INTERVAL_SECS constant in storage/disk.rs - Removed #[allow(dead_code)] from process_header_with_fork_detection (method is used) - Removed unused extract_address_from_script method in transaction_processor.rs Baseline sync verified: Headers: 1306356 🤖 Generated with Claude Code: https://claude.ai/code Co-Authored-By: Claude <noreply@anthropic.com> * fix(dash-spv): partial test compilation fixes Fixed several test compilation issues: - Fixed create_mock_mn_list_diff reference in validation_test.rs - Replaced ValidationMode::CheckpointsOnly with ValidationMode::Basic - Fixed ClientConfig builder pattern (removed with_max_peers) - Fixed WatchItem::Address structure (removed label field) Note: 227 test compilation errors remain due to API changes. These will require more extensive refactoring in a future commit. Main code still compiles and baseline sync verified: Headers: 1306357 🤖 Generated with Claude Code: https://claude.ai/code Co-Authored-By: Claude <noreply@anthropic.com> * chore(dash-spv): remove obsolete test examples Removed 3 obsolete headers2 test files that are no longer relevant: - headers2_test.rs - headers2_protocol_test.rs - headers2_transition_test.rs These tests were for the old headers2 implementation which has been superseded. 
  Baseline sync verified: Headers: 1306358

  🤖 Generated with Claude Code: https://claude.ai/code
  Co-Authored-By: Claude <noreply@anthropic.com>

* chore(dash-spv): remove more vestigial test code

  - Removed phase2_integration_test.rs and phase3_integration_test.rs; these tests were for deleted features (parallel, recovery, scheduler, batching)
  - Fixed some import paths in chainlock_validation_test.rs
  - Updated DashSpvClient::new() calls to use the single-argument API

  Progress: reduced test errors from 227 to 212. Main code still compiles and baseline sync verified: Headers: 1306365

  🤖 Generated with Claude Code: https://claude.ai/code
  Co-Authored-By: Claude <noreply@anthropic.com>

* fix(dash-spv): partial test compilation fixes

  Phase 1 completed: fixed simple test errors
  - error_types_test: fixed a borrow checker issue
  - qrinfo_integration_test: removed references to deleted config fields
  - block_download_test: fixed Result unwrapping

  Phase 2 progress: partial lib test fixes
  - Fixed Wallet::new() to include a storage parameter
  - Made create_mock_mn_list_diff public
  - Reduced lib test errors from 88 to 84

  Tests now compiling: 24 of 28 integration tests pass. Remaining work: 3 complex integration tests plus the remaining lib tests.

  🤖 Generated with Claude Code: https://claude.ai/code
  Co-Authored-By: Claude <noreply@anthropic.com>

* fix(dash-spv): major progress on test compilation fixes

  Phase 1 completed: fixed MockNetworkManager
  - Implemented all required NetworkManager trait methods
  - Fixed return types (Option<NetworkMessage>, NetworkResult)
  - Added missing methods: as_any, peer_count, peer_info, send_ping, handle_ping, etc.
  - Fixed ServiceFlags import path to dashcore::network::constants::ServiceFlags

  Phase 2 progress: fixed many struct field issues
  - Updated PeerInfo construction with correct fields
  - Fixed DiskStorageManager::new to use await (async)
  - Replaced UInt256 references with BlockHash::all_zeros()
  - Fixed Wallet::new() to include a storage parameter

  Current status:
  - 24 of 28 integration tests now compile (was 21)
  - chainlock_validation_test reduced from 47 to 22 errors
  - Lib test errors reduced from 88 to ~84

  Remaining work: complete struct fixes, StorageManager implementations, and remaining API updates.

  🤖 Generated with Claude Code: https://claude.ai/code
  Co-Authored-By: Claude <noreply@anthropic.com>

* fix: resolve various test failures

* remove dash-spv-data

* rm test checkpoint data

* fix(dash-spv): use temporary directory by default instead of ./dash-spv-data

  Changes the default data directory behavior to create unique temporary directories in /tmp rather than using ./dash-spv-data in the project directory. This prevents cluttering the workspace and aligns with test behavior.

  - Default: creates a /tmp/dash-spv-{timestamp}-{pid} directory
  - Explicit: the --data-dir flag still works for custom paths
  - Tests: already use TempDir and are unaffected

  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* fix(dash-spv): remove obsolete StoredTerminalBlock references from tests

  The StoredTerminalBlock type and related methods (store_terminal_block, get_terminal_block, clear_terminal_block) were removed from the StorageManager trait, but references remained in test mock implementations.

  - Removed terminal block method implementations from MockStorageManager in error_handling_test.rs and error_recovery_integration_test.rs
  - These methods are no longer part of the StorageManager trait

  Note: these test files still have other compilation issues with their mock implementations that need separate fixes.
  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* test(dash-spv): improve error message for SyncManager creation failure

  Replace unwrap() with expect() in block_download_test.rs to provide clearer error messages when test setup fails. This makes test failures more actionable by showing what specifically failed during initialization.

  - Changed SyncManager::new().unwrap() to use .expect() with a descriptive message
  - Test verified: all 9 tests in block_download_test pass

  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* fix(dash-spv): remove deprecated ClientConfig fields from tests

  Remove the deprecated enable_qr_info and qr_info_fallback fields from network/tests.rs to match the updated ClientConfig structure. These fields are no longer part of the configuration API.

  - Removed enable_qr_info and qr_info_fallback fields from the test config
  - Kept qr_info_extra_share and qr_info_timeout, which are still valid
  - Tests compile successfully with the updated config

  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* chore: remove obsolete planning documents

  Remove QRInfo and Phase 3 planning documents that are no longer needed, as the implementation has been completed.

  - Removed PLAN_QRINFO.md
  - Removed PHASE_3_IMPLEMENTATION.md
  - Removed PLAN_QRINFO_2.md

  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* chore: remove obsolete documentation files from dash-spv

  Remove temporary documentation files that were created during the QRInfo implementation and code removal process. These files served their purpose during development and are no longer needed.
  - Removed QRINFO_COMPLETION_SUMMARY.md
  - Removed QRINFO_FINAL_STATUS.md
  - Removed QRINFO_IMPLEMENTATION_STATUS.md
  - Removed QRINFO_INTEGRATION_COMPLETE.md
  - Removed REMOVAL_EXECUTION_SUMMARY.md
  - Removed REMOVAL_PLAN.md

  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: quantum <quantum@pastas-Mac-Studio.local>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: dcg <dcg@dcgs-Mac-Studio.local>
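The temporary-directory default from the "use temporary directory by default" commit above can be sketched as follows. Only the `/tmp/dash-spv-{timestamp}-{pid}` path format comes from the commit message; `default_data_dir` is a hypothetical helper name, not the actual dash-spv API:

```rust
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};

/// Build a unique default data directory such as /tmp/dash-spv-{timestamp}-{pid}.
/// Hypothetical helper illustrating the path format described in the commit.
fn default_data_dir() -> PathBuf {
    let timestamp = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the UNIX epoch")
        .as_secs();
    let pid = std::process::id();
    // temp_dir() is /tmp on typical Unix systems, matching the commit message.
    std::env::temp_dir().join(format!("dash-spv-{timestamp}-{pid}"))
}
```

Because the timestamp and PID are baked into the name, concurrent runs get distinct directories without coordinating, which is why tests using `TempDir` are unaffected.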
…d checkpoint sync

Previously, the status display used different logic for calculating header heights:
- Checkpoint sync: correctly used sync_base_height + storage_count
- Genesis sync: incorrectly checked chain_state.headers, which was kept minimal

This caused the status to show all zeros when syncing from genesis, while headers were actually being synced to disk successfully.

The fix unifies both paths to use the same formula:
- Genesis sync: 0 + storage_count = actual height
- Checkpoint sync: checkpoint_height + storage_count = actual height

This ensures accurate progress reporting regardless of sync method.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
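The unified height calculation this commit describes reduces to a single formula for both paths; a minimal sketch, with function and parameter names taken from the commit message rather than the actual dash-spv code:

```rust
/// Unified header height: base height plus the number of headers in storage.
/// Genesis sync passes a base of 0; checkpoint sync passes the checkpoint height.
fn current_header_height(sync_base_height: u32, storage_count: u32) -> u32 {
    sync_base_height + storage_count
}
```

With this in place, genesis sync reports `0 + storage_count` and checkpoint sync reports `checkpoint_height + storage_count`, so the status display can no longer show zeros while headers are accumulating on disk.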
Force-pushed from ec29139 to 4fd651f
…loom-filters-chain-management
This pull request introduces significant updates to the `CLAUDE.md` file, adds new dependencies and features to the `dash-spv-ffi` crate, and enhances the FFI layer for Dash's SPV client. The changes streamline documentation, improve functionality, and expand support for mempool tracking and synchronization in the Dash SPV client.

Documentation Updates:
`CLAUDE.md`: Rewritten to focus on the Dash cryptocurrency protocol library, replacing the SPARC methodology with detailed descriptions of the repository structure, build commands, testing commands, and key features. Removed SPARC-specific workflows and commands.

Dependency Enhancements:
`dash-spv-ffi/Cargo.toml`: Added `env_logger` and `tracing` dependencies to improve logging and tracing capabilities.

FFI Layer Improvements:
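A minimal sketch of what those dependency additions could look like in `dash-spv-ffi/Cargo.toml`; the version numbers are illustrative, not taken from the PR:

```toml
[dependencies]
# Env-var-driven log output (e.g. RUST_LOG=dash_spv=debug); version illustrative.
env_logger = "0.11"
# Structured spans and events, consumed by a tracing subscriber; version illustrative.
tracing = "0.1"
```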
`dash-spv-ffi/include/dash_spv_ffi.h`: Introduced new enums (`FFIMempoolStrategy`, `FFISyncStage`) and structs (`FFIDetailedSyncProgress`) for enhanced synchronization and mempool tracking. Added new callback types for mempool events and updated `FFIEventCallbacks` to include these callbacks.

`dash-spv-ffi/include/dash_spv_ffi.h`: Added new methods for mempool tracking (`dash_spv_ffi_client_enable_mempool_tracking`, `dash_spv_ffi_client_get_balance_with_mempool`) and synchronization (`dash_spv_ffi_client_sync_to_tip_with_progress`, `dash_spv_ffi_client_cancel_sync`).

Configuration Updates:
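The new FFI types are only named in the summary above; a hypothetical Rust-side sketch of what such `#[repr(C)]` types could look like follows. All variant names, fields, and discriminant values are invented for illustration and are not the actual `dash-spv-ffi` definitions:

```rust
/// Hypothetical sketch of FFI-safe types matching the names introduced in
/// dash_spv_ffi.h. Variants, fields, and values are illustrative guesses,
/// not the real dash-spv-ffi declarations.
#[allow(dead_code)]
#[repr(C)]
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum FFIMempoolStrategy {
    /// Track every mempool transaction (illustrative variant).
    TrackAll = 0,
    /// Track only transactions matching the bloom filter (illustrative variant).
    BloomFiltered = 1,
}

#[allow(dead_code)]
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct FFIDetailedSyncProgress {
    /// Current sync stage as a raw FFISyncStage discriminant (illustrative field).
    pub stage: u32,
    pub current_height: u32,
    pub target_height: u32,
}
```

`#[repr(C)]` with explicit discriminants keeps the layout stable across the FFI boundary, which is what lets cbindgen mirror these types into the generated C header.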
`dash-spv-ffi/cbindgen.toml`: Excluded specific callback types (`Option_BlockCallback`, `Option_TransactionCallback`, `Option_BalanceCallback`) from export to simplify the FFI configuration.

Modulemap Addition:
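Excluding generated `Option_*` aliases from the header is typically done with the `exclude` list in cbindgen's `[export]` table; a sketch of what the change might look like (the actual file may differ):

```toml
[export]
# Keep cbindgen from emitting these generated Option_* aliases into the C header.
exclude = [
    "Option_BlockCallback",
    "Option_TransactionCallback",
    "Option_BalanceCallback",
]
```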
`dash-network-ffi/src/dash_network_ffiFFI.modulemap`: Added a new modulemap for the Dash Network FFI, enabling better integration with Darwin and built-in types.