
Implement record chunking for "aes128gcm" scheme.#60

Merged
rfk merged 1 commit into main from aes128gcm-record-chunking on Mar 30, 2021

Conversation

@rfk
Contributor

@rfk rfk commented Mar 25, 2021

This restores previous functionality where "aes128gcm" could chunk
large payloads into multiple records. It is using a different
algorithm than the previous implementation, but one that I think
is easier to understand (or at least, better documented 😅).

I want to add more tests, but I'm pushing this now for visibility.

Connects to #55.

/// * The plaintext is distributed as evenly as possible between records. Records
/// consisting entirely of padding will only be produced in degenerate cases such
/// as where the caller requested far more padding than available plaintext, or
/// where the requested total size falls just beyond a record boundary.
Contributor Author


I'll admit, I couldn't entirely get my head around the previous record-chunking algorithm, but as far as I could tell it was doing a kind of greedy allocation of as much padding as possible to the start of the record stream. For example, if you requested 10 bytes of plaintext and 10 bytes of padding split between two records, you'd get 1 plaintext + 9 padding in the first record, then 9 plaintext + 1 padding in the second.

My read of the padding-related comments in the RFC suggests that we should instead be trying to distribute the plaintext as evenly as possible between the records, to reduce the potential for observable timing differences in the processing of each record. I have tried to implement that here (and I intend to add some tests to demonstrate it on concrete examples).

I have also removed an error case where we would refuse to ever generate a record consisting entirely of padding. The RFC does specifically warn against naively generating such records, and the algorithm here will only do so under particularly degenerate cases. But IMHO it may be legit for an application to want to pad out e.g. a very short message to a large fixed length, and it's not helpful for us to prevent it from doing so.

I believe we inherited this error case from the C ece library, but note that Martin's nodejs implementation does not seem to error out in this case, so I feel pretty comfortable in removing it.

(It's also true that in the current setup, if a caller did hit this error then there's nothing they can do about it, because we don't expose the ability to select the amount of padding; callers might just accidentally land on a plaintext + padding size that produces a padding-only block out of bad luck).
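The even-distribution idea can be sketched in miniature. This is a toy model, not the PR's actual code: `split_evenly` and `data_per_record` are made-up names, and it treats a record as simply "some plaintext bytes plus some padding bytes", ignoring the delimiter byte and AEAD tag that real RFC 8188 records also carry.

```rust
/// Toy sketch (not the PR's implementation): split `plaintext_len` bytes of
/// plaintext and `pad_len` bytes of padding into records that each hold up to
/// `data_per_record` bytes of plaintext + padding combined, spreading the
/// plaintext as evenly as possible across the records.
///
/// Returns one `(plaintext_bytes, padding_bytes)` pair per record.
fn split_evenly(
    plaintext_len: usize,
    pad_len: usize,
    data_per_record: usize,
) -> Vec<(usize, usize)> {
    let total = plaintext_len + pad_len;
    // Ceiling division: the minimum number of records needed.
    let num_records = (total + data_per_record - 1) / data_per_record;
    let mut out = Vec::with_capacity(num_records);
    let mut plaintext_left = plaintext_len;
    let mut total_left = total;
    for i in 0..num_records {
        let records_left = num_records - i;
        // This record's share of the remaining bytes (ceiling of the average),
        // and a proportional share of the remaining plaintext.
        let this_total = (total_left + records_left - 1) / records_left;
        let this_plain =
            ((plaintext_left + records_left - 1) / records_left).min(this_total);
        out.push((this_plain, this_total - this_plain));
        plaintext_left -= this_plain;
        total_left -= this_total;
    }
    out
}
```

On the 10 + 10 example from the comment above, this yields two records of 5 plaintext + 5 padding each, rather than the old greedy 1 + 9 / 9 + 1 split. It also shows the degenerate case discussed below: 1 byte of plaintext with 19 bytes of padding across two 10-byte records produces a second record that is all padding.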

@rfk rfk force-pushed the aes128gcm-record-chunking branch 3 times, most recently from ec8b57e to 6a30d02, on March 25, 2021 06:24
@rfk rfk requested a review from jrconlin March 25, 2021 06:24
@rfk
Contributor Author

rfk commented Mar 25, 2021

Alrighty, I've added some tests here and it's behaving as I expect, so I think this is ready for review @jrconlin.

@rfk rfk marked this pull request as ready for review March 25, 2021 06:25
@rfk rfk changed the base branch from disentangle-the-trait to main March 25, 2021 06:25
@rfk rfk force-pushed the aes128gcm-record-chunking branch 2 times, most recently from 4f4bad9 to ec662b9, on March 25, 2021 06:30
@rfk rfk force-pushed the aes128gcm-record-chunking branch from ec662b9 to 01ef7a9 on March 25, 2021 06:36
@rfk rfk requested a review from mhammond March 25, 2021 06:36
@rfk
Contributor Author

rfk commented Mar 25, 2021

@mhammond if you're happy to take a look here, I'd appreciate a fresh set of eyes on my hypothesis that this chunking algorithm is "easier to understand"...

Member

@mhammond mhammond left a comment


Far out, I'm glad I didn't need to read the RFC and decypher (ha!) what needed doing here. This looks fine to me - code is clear, well organized and well tested, which is probably the best I can offer here.

@rfk
Contributor Author

rfk commented Mar 30, 2021

I'm going to go ahead and merge this on the strength of Mark's review, the fact that it successfully round-trips through our existing multi-record-decoding setup, and the fact that I was able to manually send a 5000+ byte send-tab payload from the Rust code to Firefox and confirm that it was chunked into multiple records and received correctly.
