Update the storage service reading. #3011
base: main
Conversation
    message_index: i64,
    num_chunks: i32,
) -> Result<S, ServiceStoreError> {
    let mut value = Vec::new();
    let mut handles = Vec::new();
    println!("read_entries: message_index={message_index} num_chunks={num_chunks}");
I'm assuming this is a debug print?
Ah, sorry, forgot to push.
linera-storage-service/src/server.rs (outdated)
@@ -45,10 +45,16 @@ enum ServiceStoreServerInternal {
    RocksDb(RocksDbStore),
}

#[derive(Default)]
struct BigRead {
    num_read: usize,
When I read this I was confused about whether this was the expected total number of reads or something else. Maybe num_processed_chunks or something like that would be a better name?
Ok, done.
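For reference, a sketch of the struct after the rename; only the field shown in the diff is included, and any other bookkeeping fields are assumed unchanged:

```rust
// Bookkeeping for a read whose response is split into gRPC-sized chunks.
#[derive(Default)]
struct BigRead {
    num_processed_chunks: usize,
}
```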
linera-storage-service/src/client.rs (outdated)
.collect::<Result<_, _>>()?;
let value = values.into_iter().flatten().collect::<Vec<_>>();
Ideally we would collect only once. Not sure how to do it with combinators, but a for-loop should work.
Ok, replaced with a for loop. Indeed, I saw some potential issues with the two collect statements, but we seem to prefer functional code, which is why I used them originally.
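A minimal sketch of the single-pass version discussed here. The name merge_chunks is hypothetical, and StoreError stands in for ServiceStoreError; only the handles vector comes from the diff:

```rust
use tokio::task::JoinHandle;

// Hypothetical error type standing in for ServiceStoreError.
#[derive(Debug)]
enum StoreError {
    TaskJoin,
}

// Single-pass merge: append each chunk directly to one buffer instead of
// first collecting a Vec<Vec<u8>> and then flattening it.
async fn merge_chunks(
    handles: Vec<JoinHandle<Result<Vec<u8>, StoreError>>>,
) -> Result<Vec<u8>, StoreError> {
    let mut value = Vec::new();
    for handle in handles {
        // The first `?` propagates a failed chunk read; map_err covers
        // the task-join failure.
        let chunk = handle.await.map_err(|_| StoreError::TaskJoin)??;
        value.extend(chunk);
    }
    Ok(value)
}
```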
    index,
};
let request = tonic::Request::new(query);
let response = client.process_specific_chunk(request).await?;
Curious what this was for
The calls to read_value, read_multi_values, find_keys_by_prefix and find_key_values_by_prefix can return a payload larger than the maximum gRPC message size. So we need to split the read into several chunks, which are processed one by one.
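To illustrate the chunked-read flow just described, here is a hedged sketch. The call process_specific_chunk and the use of tonic come from the diff; SpecificChunkRequest, ChunkResponse, StoreClient, the field names, and read_all_chunks are all hypothetical stand-ins for the generated proto types:

```rust
use tonic::{Request, Status};

// Hypothetical stand-ins for the generated proto message types.
pub struct SpecificChunkRequest {
    pub message_index: i64,
    pub index: i32,
}
pub struct ChunkResponse {
    pub chunk: Vec<u8>,
}

// Stub for the tonic-generated storage-service client.
pub struct StoreClient;
impl StoreClient {
    pub async fn process_specific_chunk(
        &mut self,
        _request: Request<SpecificChunkRequest>,
    ) -> Result<tonic::Response<ChunkResponse>, Status> {
        unimplemented!("generated gRPC call")
    }
}

// Fetch every chunk of a big response and concatenate them in order.
// Each chunk individually fits under the gRPC message-size limit.
pub async fn read_all_chunks(
    client: &mut StoreClient,
    message_index: i64,
    num_chunks: i32,
) -> Result<Vec<u8>, Status> {
    let mut value = Vec::new();
    for index in 0..num_chunks {
        let query = SpecificChunkRequest { message_index, index };
        let response = client.process_specific_chunk(Request::new(query)).await?;
        value.extend(response.into_inner().chunk);
    }
    Ok(value)
}
```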
Motivation

The read_entries code of the storage-service does a blocking loop. That is inefficient from the viewpoint of concurrency.

Proposal
The following was done: the chunk requests in read_entries are now dispatched concurrently instead of one blocking read at a time, and the results are reassembled in their original order (see the sketch below).
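A sketch of the non-blocking shape, inferred from the handles vector in the diff; fetch_chunk is a placeholder for the per-chunk gRPC call, and the read_entries signature here is simplified:

```rust
use tokio::task::JoinHandle;

// Placeholder body; the PR issues process_specific_chunk here.
async fn fetch_chunk(index: i32) -> Vec<u8> {
    vec![index as u8]
}

async fn read_entries(num_chunks: i32) -> Vec<u8> {
    // Spawn all chunk reads up front so they run concurrently...
    let mut handles: Vec<JoinHandle<Vec<u8>>> = Vec::new();
    for index in 0..num_chunks {
        handles.push(tokio::spawn(fetch_chunk(index)));
    }
    // ...then await the handles in request order, so the reassembled
    // bytes keep their original ordering (the property exercised by
    // test_storage_service_big_raw_write).
    let mut value = Vec::new();
    for handle in handles {
        value.extend(handle.await.expect("chunk task panicked"));
    }
    value
}
```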
Test Plan

The CI proved adequate. The test_storage_service_big_raw_write test did catch the problem with the ordering of the requests.

Release Plan
Not relevant, as we do not yet use the storage-service in TestNet / DevNet.

Links
None.