Adjusting ssl_read_from_net to read limited amount of data in one go.#2249
shinrich merged 1 commit into apache:master
Conversation
maskit
left a comment
Tried 100MB POST request and it works fine for me.
pdm also verified that this fix works in his environment.

@oknet can you review this please as well, so we can get this shipped into 7.1.0 too?

Should we land this, or wait for @oknet?

I think we can push on. This addresses the issues that @oknet raised in the previous PRs.

Ship it!

Naive question though: is this going to limit or affect throughput and performance on TLS sessions?
The implementation of these functions in UnixNetVC is very vague.
event = (s->vio.ntodo() <= 0) ? SSL_READ_COMPLETE : SSL_READ_READY;
if (sslErr == SSL_ERROR_NONE && s->vio.ntodo() > 0) {
  // We stopped with data on the wire (to avoid overbuffering). Make sure we are triggered
  sslvc->read.triggered = 1;
The triggered flag is only cleared in net_read_io and set in NetHandler::mainNetEvent.
Here, just return SSL_READ_READY to inform net_read_io, and it will call readReschedule.
Yes, we should return SSL_READ_READY; will fix that. However, we should also set the read trigger. Otherwise, we will be waiting until the next read event, and if there is read data left, we will not get another read event from epoll.
The triggered flag is set in NetHandler::mainNetEvent().
Nothing clears triggered in ssl_read_from_net().
It is only cleared in SSLNetVC::net_read_io() if SSL_READ_WOULD_BLOCK, SSL_READ_EOS, or SSL_READ_ERROR is returned from ssl_read_from_net().
In other words, read.triggered is always true here; you can try ink_assert(sslvc->read.triggered) here.
iocore/net/SSLNetVConnection.cc (outdated)
Warning("Cannot add new block");
// If we filled up one block, give back to the event loop so we don't
// overbuffer.
if (bytes_read > 0) {
I believe this condition check breaks the watermark design in IOBuffer.
We can use readv() in UnixNetVC::read_from_net to read data into multiple IOBufferBlocks if the remaining space in the first IOBufferBlock is less than the watermark size.
But since there is only one SSLReadBuffer, ATS should read() data into every IOBufferBlock that is attached to the MIOBuffer.
OK, I will replace my logic that goes per block and explicitly adds blocks. Instead, I will call buf.writer()->write_avail() and rely on that to do the appropriate watermark calculations and add new blocks as necessary.
@zwoop By keeping the trigger bit set in the case where data is left on the wire, we should not run into performance problems.
Force-pushed from 9f60842 to 42f598f.
Do we really want the tsxs changes in this PR? That seems odd; it ought to be a separate PR, I think. Also, before we land this, make sure to squash into one commit. This is running on Docs, btw.
Force-pushed from 365b1d3 to 7e7ed6d.
Removed the tsxs changes, squashed, and pushed, clearing previous approvals.
What do y'all say, should we land this? I've run it on Docs without problems.
I think we should land it. Someone will need to re-approve since my last push cleared @oknet's approval.
zwoop
left a comment
Ok, lets land this and start testing in real production.
Cherry-picked to 7.1.x
Addressing the problem seen by pdm: large POSTs would cause the inbound inactivity timeout to fire, killing the transaction.
Working with @oknet and @maskit, we reworked ssl_read_from_net to not read as much as possible in one call. Instead, it reads enough to fill one buffer block and then gives control back to the event loop. The theory is that a fast client would otherwise keep ATS in the read loop, buffering up the POST body before passing along the data. The read trigger is set if there is still data on the wire.