
Conversation

@ivandika3 (Contributor) commented Dec 29, 2025

What changes were proposed in this pull request?

Currently, the datanode has an option to flush writes on chunk boundaries (hdds.container.chunk.write.sync), which is disabled by default since it might hurt DN write throughput and latency. However, with it disabled, if the datanode machine goes down suddenly (e.g. power failure, or the process is reaped by the OOM killer), a file can end up with incomplete data even though PutBlock (the write commit) succeeded, which violates our durability guarantee. Although PutBlock triggers FilePerBlockStrategy#finishWriteChunks, which closes the file (RandomAccessFile#close), the page cache might not have been flushed yet: closing a file does not imply that its buffered data is flushed to disk (see https://man7.org/linux/man-pages/man2/close.2.html). So there is a chance that the user's key is committed but the data does not exist on the datanodes.
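To make the failure mode concrete, here is a minimal, self-contained Java sketch (illustrative only, not Ozone code; the class and method names are hypothetical):

import java.io.IOException;
import java.io.RandomAccessFile;

public final class CloseVsForce {

  // Close without forcing: the kernel may still hold the written bytes only
  // in the page cache, so a power failure here can lose "committed" data.
  static void writeAndClose(String path, byte[] data) throws IOException {
    try (RandomAccessFile file = new RandomAccessFile(path, "rw")) {
      file.write(data);
    } // close() releases the descriptor but does not flush the page cache
  }

  // Force before closing: force(false) flushes file content (fdatasync-like),
  // force(true) flushes metadata as well (fsync-like).
  static void writeForceAndClose(String path, byte[] data) throws IOException {
    try (RandomAccessFile file = new RandomAccessFile(path, "rw")) {
      file.write(data);
      file.getChannel().force(true);
    }
  }
}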

However, flushing on every WriteChunk might cause unnecessary overhead. We should instead consider calling FileChannel#force on PutBlock rather than on WriteChunk: the data only becomes visible to users when PutBlock returns successfully (i.e. the data is committed), and on failure the client will replace the block (allocate another one). This way we can guarantee that after the user has successfully uploaded a key, the data is persistently stored on the leader and at least one follower has promised to flush it (MAJORITY_COMMITTED).
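A hedged sketch of that direction, assuming the PutBlock handler can reach the block file's still-open FileChannel (the helper name syncOnPutBlock and the endOfBlock flag are hypothetical, not the actual patch):

import java.io.IOException;
import java.nio.channels.FileChannel;

final class PutBlockSyncSketch {
  // Sync the open block file when PutBlock commits, instead of after every
  // WriteChunk.
  static void syncOnPutBlock(FileChannel blockChannel, boolean endOfBlock)
      throws IOException {
    if (endOfBlock) {
      return; // finishWriteChunks will sync and close the file anyway
    }
    // The block stays open for further WriteChunks, so force the content now
    // to make the committed data durable.
    blockChannel.force(false);
  }
}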

This might still affect write throughput and latency, since we wait for the buffer cache to be flushed to persistent storage (SSD or disk), but it strengthens our data durability guarantee (which should be our priority). Flushing the buffer cache might also reduce the datanode's memory usage.

In the future, we should consider enabling hdds.container.chunk.write.sync by default.

What is the link to the Apache JIRA

https://issues.apache.org/jira/browse/HDDS-14246

How was this patch tested?

CI when sync is enabled (https://github.com/ivandika3/ozone/actions/runs/20535392231)

@ivandika3 marked this pull request as ready for review December 30, 2025 01:15
@rich7420 (Contributor) commented:

Thanks @ivandika3 for the patch!

Comment on lines 89 to 90
in the container happen as sync I/0 or buffered I/O operation. For FilePerBlockStrategy, this
the sync I/O operation only happens before block file is closed.
A reviewer (Contributor) commented:

Suggested change
- in the container happen as sync I/0 or buffered I/O operation. For FilePerBlockStrategy, this
- the sync I/O operation only happens before block file is closed.
+ in the container happen as sync I/O or buffered I/O operation. For FilePerBlockStrategy, this
+ sync I/O operation only happens before block file is closed.

@ivandika3 (Contributor, Author) replied:

Thanks, updated.

@ivandika3 self-assigned this Jan 5, 2026
@swamirishi (Contributor) commented:

@vyalamar Do you wanna take a look at this patch?

@swamirishi (Contributor) commented:

@rnblough Do you wanna take a look at this issue?

@siddhantsangwan self-requested a review January 6, 2026 06:02
@siddhantsangwan (Contributor) left a comment:

@ivandika3 Thanks for working on this. I agree with the overall idea, will do a deeper review soon.

@siddhantsangwan (Contributor) left a comment:

What about the Ratis streaming writes? Will this change also affect that code path and do we need any handling there? CC @szetszwo

Please add some tests to verify this change.

Comment on lines 1085 to 1087
if (eob) {
  chunkManager.finishWriteChunks(kvContainer, blockData);
}
A reviewer (Contributor) commented:

Do we also need to sync in the else case here, when eob is false? Similar to the else case that you added in handlePutBlock.

@ivandika3 (Contributor, Author) replied on Jan 8, 2026:

FilePerBlockStrategy#finishWriteChunks calls FilePerBlockStrategy.OpenFiles#close, which calls OpenFile#close, which syncs before closing the block file.
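A hedged sketch of that close path (the class name mirrors the ones mentioned above, but the body is illustrative, not the actual Ozone source):

import java.io.IOException;
import java.io.RandomAccessFile;

final class OpenFileSketch {
  private final RandomAccessFile file;

  OpenFileSketch(RandomAccessFile file) {
    this.file = file;
  }

  void close() throws IOException {
    file.getChannel().force(true); // sync data and metadata first
    file.close();                  // then release the descriptor
  }
}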

Comment on lines 89 to 90
in the container happen as sync I/0 or buffered I/O operation. For FilePerBlockStrategy, this
the sync I/O operation only happens before block file is closed.
A reviewer (Contributor) commented:

Remove "the".

For FilePerBlockStrategy, this the sync

@ivandika3 (Contributor, Author) replied:

Updated.

@ivandika3 (Contributor, Author) commented Jan 8, 2026:

What about the Ratis streaming writes? Will this change also affect that code path and do we need any handling there? CC @szetszwo

Streaming Write Pipeline sync is triggered by the client, and I made it configurable in #9533 through the ozone.client.datastream.sync.size configuration. In the future, we might need to revisit this. I expect it requires 1) a way to keep track of the DataChannel (KeyValueStreamDataChannel) in the DN, and 2) some logic to decide whether to get the FileChannel from FilePerBlockStrategy#OpenFiles. Alternatively, we could make StandardWriteOption.CLOSE also trigger a sync.
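For reference, a hedged example of setting that client-side threshold (the key name comes from #9533 as noted above; the raw-byte value format is an assumption):

import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public final class StreamSyncConfigExample {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Assumed format: sync roughly every 4 MB of streamed data.
    conf.set("ozone.client.datastream.sync.size", String.valueOf(4L << 20));
    System.out.println(conf.get("ozone.client.datastream.sync.size"));
  }
}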

Please add some tests to verify this change.

Let me think about this. It requires some fault injection to trigger a datanode crash just after PutBlock. Let me check whether we can use byteman under ozone-fi for this.
