Some cloud providers overwrite a file as an atomic operation that happens on the call to writer.Close(). But for localfs and sftp, we currently remove the file when the writer is opened and then overwrite the object as we stream bytes to it. This creates a window of time when the file is in an inconsistent state for those stores that don't support atomic replacement on Close().
There isn't a backing file when you use store.NewWriter, since it writes/pipes directly to the target. I know GCS best: when you begin piping data via a writer, that data isn't visible until you call Close(). Then the new data is swapped into place, replacing any existing object and its content.
We should discuss how big of a deal this is; maybe it's fine? We'll never get every store to have exactly the same side effects, so it's just a best effort on our part.