
Ability to chunk download from object store #274

Open
trungda opened this issue Jul 24, 2024 · 12 comments

Labels: enhancement (New feature or request)

Comments

trungda (Contributor) commented Jul 24, 2024

Is your feature request related to a problem or challenge? Please describe what you are trying to do.
When downloading large objects (> 300 MB) with the object_store crate, I often hit a timeout using the default configuration (30-second connection timeout). Interestingly, when I increase the timeout, the download speed is actually lower (not sure if it's the same for everyone?).
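
For reference, a minimal sketch of raising the timeouts via ClientOptions (this assumes the aws feature is enabled; the builder, bucket name, and durations are illustrative, and it only works around the symptom rather than avoiding large single requests):

use std::time::Duration;
use object_store::{aws::AmazonS3Builder, ClientOptions};

let store = AmazonS3Builder::from_env()
    .with_bucket_name("my-bucket") // illustrative bucket name
    .with_client_options(
        ClientOptions::new()
            .with_timeout(Duration::from_secs(300))         // overall request timeout
            .with_connect_timeout(Duration::from_secs(30)), // connection timeout
    )
    .build()?;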

Describe the solution you'd like
I am wondering whether it makes sense to chunk a file into smaller ranges (say, 100 MB each), download each range in parallel over a separate connection, and reconcile them under the same interface.

Describe alternatives you've considered
Not sure if such a capability can be composed using the existing interfaces.

Additional context

@trungda trungda added the enhancement New feature or request label Jul 24, 2024
trungda (Contributor, Author) commented Jul 24, 2024

I originally submitted this issue in the datafusion repo, which I think was the wrong repo. Quoting the reply from @alamb:

Thank you @trungda

I think it would be very interesting to build a "parallel downloader" ObjectStore implementation, though I am not sure it necessarily belongs in the core object_store crate (though it could be added if there is enough interest)

There might also be some interesting ideas to explore around "racing reads" to avoid latency

There are many good ideas in this paper, BTW: https://dl.acm.org/doi/10.14778/3611479.3611486

I think you could compose this kind of smart client from the existing interfaces

tustvold (Contributor):

It should be relatively straightforward to achieve this using buffer_ordered from the futures crate; we may just need to document how to do this
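
A minimal sketch of that approach (the in-order combinator in the futures crate is called buffered; buffer_unordered is the out-of-order variant). It assumes a recent object_store release where sizes and ranges are u64; the function name, chunk size, and parallelism are illustrative:

use std::sync::Arc;

use bytes::Bytes;
use futures::stream::{self, StreamExt, TryStreamExt};
use object_store::{path::Path, ObjectStore};

async fn chunked_get(
    store: Arc<dyn ObjectStore>,
    location: Path,
    chunk_size: u64,
    parallelism: usize,
) -> object_store::Result<Vec<Bytes>> {
    // One head request to learn the object size.
    let size = store.head(&location).await?.size;

    // Split [0, size) into fixed-size byte ranges.
    let ranges: Vec<_> = (0..size)
        .step_by(chunk_size as usize)
        .map(|start| start..(start + chunk_size).min(size))
        .collect();

    // Issue up to `parallelism` range requests at once; `buffered`
    // yields the chunks in their original order.
    stream::iter(ranges)
        .map(|range| {
            let store = Arc::clone(&store);
            let location = location.clone();
            async move { store.get_range(&location, range).await }
        })
        .buffered(parallelism)
        .try_collect()
        .await
}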

alamb (Contributor) commented Jul 25, 2024

Maybe it would make a good example

trungda (Contributor, Author) commented Jul 25, 2024

I can write an example. Using buffered is what we are doing to download multiple files concurrently. Something like this:

use futures::stream::StreamExt;

let parallelism = 10;
let mut downloaders = Vec::new();
for path in paths.iter() {
    // Each future downloads the whole file at `path`.
    downloaders.push(download(path));
}
// Run up to `parallelism` downloads at once; results come back in order.
let mut buffered = futures::stream::iter(downloaders).buffered(parallelism);
while let Some(_result) = buffered.next().await {}

But it's not obvious to me how to use the stream interface with buffered, i.e., how we can reconcile different streams (from different parts of the file) into one stream. Is that really needed?

alamb (Contributor) commented Jul 26, 2024

how can we reconcile different streams (from different parts of the file) into one stream

I was imagining that it would look something like making multiple calls to ObjectStore::get_ranges for each file
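
A rough sketch of that shape, reusing the imports from the sketch above and assuming a recent object_store release; `files` (a list of (Path, u64) path/size pairs), the 100 MB chunk size, and the concurrency are illustrative, and get_ranges may coalesce or split the requests internally:

let chunk_size: u64 = 100 * 1024 * 1024; // 100 MB per chunk
let parallelism = 8;                     // files fetched concurrently

let per_file: Vec<Vec<Bytes>> = stream::iter(files)
    .map(|(path, size): (Path, u64)| {
        let store = Arc::clone(&store);
        async move {
            // Byte ranges covering [0, size) for this file.
            let ranges: Vec<_> = (0..size)
                .step_by(chunk_size as usize)
                .map(|start| start..(start + chunk_size).min(size))
                .collect();
            // One get_ranges call per file fetches all of its chunks.
            store.get_ranges(&path, &ranges).await
        }
    })
    .buffered(parallelism)
    .try_collect()
    .await?;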

alamb (Contributor) commented Mar 20, 2025

I believe @crepererum is working on something like this, called "chunked downloading"

@alamb alamb transferred this issue from apache/arrow-rs Mar 20, 2025
@crepererum crepererum self-assigned this Mar 21, 2025
crepererum (Contributor):

I do. We have code for that at InfluxData and I plan to upstream it in the following order:

  1. a mock helper, to make it easier to test object store wrappers
  2. an extension that tells object store wrappers the pre-known size of an object (we use that extensively for caching, and it avoids a normal ObjectStore::get request needing an initial head request to know what it should be chunking)
  3. the actual chunking implementation

tustvold (Contributor) commented Mar 21, 2025

FWIW my preference would be to build this into the store implementations, e.g. into GetClient, as opposed to adding further wrapper types. I'd very much like to move away from wrapping things at the ObjectStore interface.

Edit: Actually my real preference would be to build this into something akin to the buffered interfaces, as opposed to baking it into ObjectStore at all. This would allow for out-of-order chunking, avoid the issue of providing size and ETag information, and generally be far more flexible...

alamb (Contributor) commented Mar 21, 2025

FWIW my preference would be to build this into the store implementations, e.g. into GetClient, as opposed to adding further wrapper types. I'd very much like to move away from wrapping things at the ObjectStore interface.

Edit: Actually my real preference would be to build this into something akin to the buffered interfaces, as opposed to baking it into ObjectStore at all. This would allow for out-of-order chunking, avoid the issue of providing size and ETag information, and generally be far more flexible...

What do you mean by "buffered interfaces"?

I mean a more general implementation sounds great, but if we have one that is implemented as an ObjectStore wrapper that also seems fine to me (we could potentially always work on the more flexible implementation later)

tustvold (Contributor) commented Mar 21, 2025

What do you mean by "buffered interfaces" ?

I am referring to things like BufReader.

if we have one that is implemented as an ObjectStore wrapper that also seems fine to me

My understanding from Marco's comment is that we would need to use the extension mechanism in order to get the size (and possibly ETag) through to the wrapper. Given this already implies a non-standard invocation of the ObjectStore::get API by the caller, I don't really see the advantage over using a separate utility helper akin to BufReader in order to achieve this. We avoid overloading the ObjectStore interface, can return data out of order, and have a cleaner, more focused API.

a more general implementation sounds great

TBC I am not suggesting an initial cut needs to implement all of the above, but that we should adopt an approach to this issue that allows for this down the line. Tbh the utility approach should be significantly simpler than an ObjectStore wrapper.
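
For context, the existing buffered utilities already take this shape: they wrap a store handle plus an ObjectMeta rather than extending the ObjectStore trait. A minimal usage sketch of object_store::buffered::BufReader, which a chunked/parallel download utility could mirror (the store and path are assumed to already exist):

use object_store::buffered::BufReader;
use tokio::io::AsyncReadExt;

// A head request supplies the size (and ETag) the reader needs up front.
let meta = store.head(&path).await?;
let mut reader = BufReader::new(Arc::clone(&store), &meta);

let mut buf = Vec::new();
reader.read_to_end(&mut buf).await?;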

alamb (Contributor) commented Mar 21, 2025

The utility approach certainly looked nice with an alternate tokio runtime
