Make S3 MultipartUpload Concurrency Configurable - Address S3 Timeouts #24330
J-Light wants to merge 2 commits into nextcloud:master from
Conversation
Signed-off-by: Nish Joseph <nish.joseph@leapaust.com.au>
@acsfer I think the idea is good. Since setting it to 3, I went from 100% failure to about 20 to 30% failure. I am going to keep this for a few more days to assess. Setting it to 1 might also be a good default; it's something I will be testing in the coming days.
Any progress on this front? I upgraded my instance, noticed the errors again, and remembered I needed to include this.
```php
'bucket' => $this->bucket,
'key' => $urn,
'part_size' => $this->uploadPartSize,
'concurrency' => $this->config->getSystemValue('objectstore.arguments.concurrency', 5),
```
This should rather be passed as a param of the existing object store configuration, which is parsed in the S3ConnectionTrait, similar to how it is already done for the uploadPartSize:
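A minimal sketch of what that suggestion could look like, assuming a `parseParams()`-style method and a new `concurrency` property in S3ConnectionTrait; the `uploadPartSize` line mirrors what the trait already does, while the `concurrency` key name and its default of 5 (the SDK default) are assumptions, not the merged implementation:

```php
// Hypothetical sketch inside S3ConnectionTrait (method/property names assumed):
protected function parseParams($params) {
    // ... existing parsing of bucket, hostname, credentials, etc. ...
    $this->uploadPartSize = $params['uploadPartSize'] ?? 524288000; // existing behaviour
    $this->concurrency    = $params['concurrency'] ?? 5;            // assumed new param, SDK default
}
```

The multipart upload call would then read `$this->concurrency` instead of calling `getSystemValue()` directly, keeping all S3 tuning knobs in one place.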
@juliushaertl does this clash somehow with the chunked upload PR, or is it still relevant and useful in its own way?
Still relevant I'd say, for cases where chunked upload is not working, e.g. when using a random WebDAV client or a Windows WebDAV mount.
Yes, I agree as well. It is still relevant.
I'd like to point out that this is still relevant in 25.0.4.
With #27034 merged for Nextcloud 26, what does this PR then still improve or fix?
See #24330 (comment)
@J-Light would you be alright to address Julius's comment :) If not, I will close this PR for inactivity unless someone else is willing to take over 🙏
@juliushaertl can you explain which components communicate via which protocol and where your proposed setting for concurrency comes into play? From what you said, I imagine a WebDAV client uploading a large file to Nextcloud, which then uploads this large file to an S3 storage and uses multipart because of the file size.
Not actively using or working on this anymore, in large part due to this issue; I have moved to OCIS. Happy to close this given how old it is.
I created a new service with the latest Nextcloud Docker image using an S3 backend. Files 4 GB and larger fail to upload. The issue is discussed in #20519 and in the forums:
https://help.nextcloud.com/t/an-exception-occurred-while-uploading-parts-to-a-multipart-upload/47163
https://help.nextcloud.com/t/primary-storage-s3-files-large-more-than-4gb/59797
https://help.nextcloud.com/t/s3-primary-object-store-large-uploads-2gb-fail-with-exception-and-timeout-using-webdav-or-nextcloud-client-or-web-ui/67185
https://help.nextcloud.com/t/s3-random-storage-problem-on-large-files/72897
No concrete solution has been proposed in any of these threads.
By trial and error, I traced the socket timeout error to the concurrency level used by AWS's S3 SDK for multipart uploads. The default is 5, which may be stressing container or server environments.
In my case, explicitly setting it to 3 resolved the socket timeouts.
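For context, the `concurrency` option in question belongs to the AWS SDK for PHP's `MultipartUploader`, which caps how many parts are uploaded in parallel. A minimal standalone sketch, with placeholder region, bucket, and file path:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$client = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // placeholder region
]);

// 'concurrency' limits how many parts are in flight at once; the SDK
// default of 5 is what appears to overload some environments.
$uploader = new MultipartUploader($client, '/tmp/large-file.bin', [
    'bucket'      => 'my-bucket',      // placeholder bucket
    'key'         => 'large-file.bin',
    'part_size'   => 524288000,        // 500 MB parts; Nextcloud passes its uploadPartSize here
    'concurrency' => 3,                // the value that resolved my timeouts
]);

try {
    $result = $uploader->upload();
    echo "Uploaded to {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}
```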
This PR makes the concurrency configurable, with the default kept at the value the SDK uses by default (5).
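To illustrate how this would be set, using the flat system config key from this PR's diff (the final key name could change before merge):

```php
<?php
// config/config.php — key name as used in this PR's diff
$CONFIG = [
    // ... existing settings ...
    'objectstore.arguments.concurrency' => 3, // SDK default is 5
];
```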
I'm new to the project, so suggestions are welcome.