Implement record chunking for "aes128gcm" scheme. #60
Conversation
> /// * The plaintext is distributed as evenly as possible between records. Records
> /// consisting entirely of padding will only be produced in degenerate cases such
> /// as where the caller requested far more padding than available plaintext, or
> /// where the requested total size falls just beyond a record boundary.
I'll admit, I couldn't entirely get my head around the previous record-chunking algorithm, but as far as I could tell it was doing a kind of greedy allocation of as much padding as possible to the start of the record stream. For example, if you requested 10 bytes of plaintext and 10 bytes of padding split between two records, you'd get 1 plaintext + 9 padding in the first record, then 9 plaintext and 1 padding in the second.
My read of the padding-related comments in the RFC suggests that we should instead try to distribute the plaintext as evenly as possible between the records, to reduce the potential for observable timing differences in the processing of each record. I have tried to implement that here (and I intend to add some tests to demonstrate it on concrete examples).
I have also removed an error case where we would refuse to ever generate a record consisting entirely of padding. The RFC does specifically warn against naively generating such records, and the algorithm here will only do so under particularly degenerate cases. But IMHO it may be legit for an application to want to pad out e.g. a very short message to a large fixed length, and it's not helpful for us to prevent it from doing so.
I believe we inherited this error case from the C ece library, but note that Martin's nodejs implementation does not seem to error out in this case, so I feel pretty comfortable in removing it.
(It's also true that in the current setup, if a caller did hit this error then there's nothing they can do about it, because we don't expose the ability to select the amount of padding; callers might just accidentally land on a plaintext + padding size that produces a padding-only block out of bad luck).
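To make the even-distribution idea concrete, here is a minimal sketch of the kind of splitting described above. The function name, signature, and structure are hypothetical illustrations of mine, not the crate's actual API; it ignores the per-record overhead details of the real "aes128gcm" format and just shows how plaintext and total bytes can each be spread as evenly as possible across records:

```rust
/// Hypothetical sketch: split `plaintext_len` bytes of plaintext and
/// `padding_len` bytes of padding across records that each carry at most
/// `capacity` content bytes. Returns `(plaintext, padding)` byte counts
/// per record, with the plaintext distributed as evenly as possible.
fn chunk_evenly(plaintext_len: usize, padding_len: usize, capacity: usize) -> Vec<(usize, usize)> {
    let total = plaintext_len + padding_len;
    // Number of records needed; always at least one, even for an empty message.
    let num_records = std::cmp::max(1, (total + capacity - 1) / capacity);
    // Spread both the total bytes and the plaintext bytes evenly: the first
    // `extra_*` records each carry one additional byte to absorb the remainder.
    let base_total = total / num_records;
    let extra_total = total % num_records;
    let base_pt = plaintext_len / num_records;
    let extra_pt = plaintext_len % num_records;
    (0..num_records)
        .map(|i| {
            let record_total = base_total + usize::from(i < extra_total);
            let record_pt = base_pt + usize::from(i < extra_pt);
            // Whatever isn't plaintext in this record is padding.
            (record_pt, record_total - record_pt)
        })
        .collect()
}
```

With the 10 + 10 example from above and a capacity of 10, this yields two records of 5 plaintext + 5 padding each, rather than the greedy 1 + 9 / 9 + 1 split.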
Force-pushed from ec8b57e to 6a30d02.
Alrighty, I've added some tests here and it's behaving as I expect, so I think this is ready for review @jrconlin.
Force-pushed from 4f4bad9 to ec662b9.
This restores previous functionality where "aes128gcm" could chunk large payloads into multiple records. It is using a different algorithm than the previous implementation, but one that I think is easier to understand (or at least, better documented 😅).
Force-pushed from ec662b9 to 01ef7a9.
@mhammond if you're happy to take a look here, I'd appreciate a fresh set of eyes on my hypothesis that this chunking algorithm is "easier to understand"...
mhammond left a comment
Far out, I'm glad I didn't need to read the RFC and decypher (ha!) what needed doing here. This looks fine to me - the code is clear, well organized, and well tested, which is probably the best I can offer here.
I'm going to go ahead and merge this on the strength of Mark's review, the fact that it successfully round-trips through our existing multi-record-decoding setup, and the fact that I was able to manually send a 5000+ byte send-tab payload from the Rust code to Firefox and confirmed that it was chunked into multiple records and received correctly.
I want to add more tests, but pushing for visibility. Connects to #55.