reshape and document the dht-cache #36
base: master
Conversation
lu-zero commented on Jul 10, 2023
- Reduce the public API
- Document it
Codecov Report

```diff
@@            Coverage Diff             @@
##           master      #36      +/-   ##
==========================================
+ Coverage   71.72%   80.02%   +8.30%
==========================================
  Files          12       17       +5
  Lines        1963     2959     +996
==========================================
+ Hits         1408     2368     +960
- Misses        555      591      +36
```
Force-pushed from 7ecbc1b to 766a1cd
dht-cache/src/cache/local.rs
Outdated
```rust
let mut cache = self.0.write().await;

if let Some(storage) = cache.store.as_mut() {
    storage.store(&elem).await;
}
```
🚫 [clippy] reported by reviewdog 🐶

```
error: this expression creates a reference which is immediately dereferenced by the compiler
  --> dht-cache/src/cache/local.rs:87:27
   |
87 |             storage.store(&elem).await;
   |                           ^^^^^ help: change this to: `elem`
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
   = note: `-D clippy::needless-borrow` implied by `-D warnings`
```
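For context, a minimal reproduction of the lint (toy names, not the PR's types): `elem` is already a reference at the call site, so `&elem` builds a `&&_` that the compiler immediately auto-dereferences; passing the existing reference through is the fix clippy suggests.

```rust
fn store(e: &str) {
    println!("{e}");
}

fn caller(elem: &str) {
    store(&elem); // clippy::needless_borrow: `&elem` is a `&&str`
    store(elem);  // the suggested fix: pass the existing reference
}

fn main() {
    caller("elem");
}
```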
Force-pushed from 185cc26 to 6a3b9fc
Force-pushed from 8898e9e to 40306f0
Force-pushed from 0132c71 to 1837b6c
I still need to finish reviewing; I have two major concerns about the use of `unsafe` and `select!`. The other things are minor.
```rust
for (topic_uuid, value) in map_topic_name.iter() {
    topic_uuid.hash(state);
    value.to_string().hash(state);
}
```
This is pretty bad, because it requires `N^2` allocations. Considering that a hash function should be fast, we have a potential problem, even from a cryptographic standpoint.
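One allocation-free alternative, sketched under the assumption that `value` is a `serde_json::Value`: stream each value's serialized bytes straight into the hasher through a small `io::Write` adapter, instead of materializing a `String` per entry.

```rust
use std::hash::Hasher;

// Feeds serialized bytes directly into a Hasher, so hashing a JSON
// value no longer allocates an intermediate String per entry.
struct HashWriter<'a, H: Hasher>(&'a mut H);

impl<H: Hasher> std::io::Write for HashWriter<'_, H> {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        self.0.write(buf);
        Ok(buf.len())
    }

    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}

// Inside the Hash impl, the loop body would become (sketch):
//
//     topic_uuid.hash(state);
//     serde_json::to_writer(HashWriter(state), value)
//         .expect("serializing an in-memory Value does not fail");
```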
We could call `hash` over the larger map. I moved the code around, but I did not check whether the hash value would change.
Is there a way to easily split these comments into a separate issue? Obviously, for me it is easy to just check the diff, but I agree that the aim here is a bit different. If we could easily split off the issues that should be fixed in different PRs, that would be great.
Sadly there isn't; it would be a nice feature though.
I'd do a round of performance improvements after we are good with the API rework :)
```rust
let topic_name = elem.topic_name.clone();
let topic_uuid = &elem.topic_uuid;

let topic = cache.mem.entry(topic_name).or_default();
```
Maybe we could consider using `contains_key` plus an eventual allocation and insertion, instead of using the `entry` API. Last time I checked, this approach was convenient for a `HashMap`, but I never did any benchmark.
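A sketch of the suggestion (the map's value type is a placeholder): with `contains_key`, the key is only cloned on the miss path, whereas `entry` needs an owned key up front even when the topic already exists. The trade-off is a second lookup.

```rust
use std::collections::HashMap;

// Placeholder for the real topic payload type.
type Topic = Vec<String>;

/// `entry(topic_name.to_owned())` would allocate even on the hit path;
/// here the allocation only happens when the key is missing.
fn get_or_insert<'a>(
    mem: &'a mut HashMap<String, Topic>,
    topic_name: &str,
) -> &'a mut Topic {
    if !mem.contains_key(topic_name) {
        mem.insert(topic_name.to_owned(), Topic::default());
    }
    mem.get_mut(topic_name).expect("just checked or inserted")
}
```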
dht-cache/src/cache/local.rs
Outdated
Suggested change:

```diff
-    pub fn new(topic: &str, cache: LocalCache) -> Self {
-        Self {
-            topic: topic.to_owned(),
+    pub fn new(topic: impl Into<String>, cache: LocalCache) -> Self {
+        Self {
+            topic: topic.into(),
```
dht-cache/src/cache/local.rs
Outdated
Suggested change:

```diff
-    pub fn with_uuid(mut self, uuid: &str) -> Self {
-        self.uuid = Some(uuid.to_owned());
+    pub fn with_uuid(mut self, uuid: impl Into<String>) -> Self {
+        self.uuid = Some(uuid.into());
```
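For illustration, why `impl Into<String>` is friendlier at call sites (the `Builder` type here is hypothetical, standing in for the PR's actual type): both `&str` and `String` arguments work, and an owned `String` is moved rather than re-allocated.

```rust
// Hypothetical stand-in for the PR's builder type.
struct Builder {
    topic: String,
    uuid: Option<String>,
}

impl Builder {
    fn new(topic: impl Into<String>) -> Self {
        Self { topic: topic.into(), uuid: None }
    }

    fn with_uuid(mut self, uuid: impl Into<String>) -> Self {
        self.uuid = Some(uuid.into());
        self
    }
}

fn main() {
    let owned = String::from("uuid-1");
    // The &str is converted; the owned String is moved, with no extra clone.
    let _b = Builder::new("my-topic").with_uuid(owned);
}
```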
Force-pushed from 0a87eb6 to 0bb9a1c
Should make it easier to test the network component using libp2p_swarm_test.
This way, the resend task would not interfere with the rest.
Co-authored-by: Edoardo Morandi <[email protected]>
Force-pushed from 0bb9a1c to 6cf9414
Force-pushed from 5c8cb37 to c4000eb
Ok, just one major concern; the rest are minor points or considerations.
```rust
            return swarm
        }
    }
    ev = swarm.select_next_some() => {
```
This seems to be cancel-safe, so the `select!` should be fine. However, this characteristic does not seem to be documented anywhere, so I don't think we can assume that future non-breaking versions of libp2p will keep this specific behavior. On the other hand, given that the implementation of `Stream` currently forwards to the internal sync `poll_next_event` function, it is very plausible that `swarm.select_next_some()` will continue to be cancel-safe.
If you want to be sure, the resulting future could be saved and pinned in order to keep it across iterations, but that is noisy and probably not worth it at this point. What do you think?
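As a toy illustration of what cancel-safety buys inside a `select!` loop (not the PR's code): tokio documents `mpsc::Receiver::recv` as cancel-safe, which is exactly the guarantee left undocumented for `select_next_some`.

```rust
use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

// Every select! iteration builds fresh futures and drops the losing
// branch. recv() is documented cancel-safe: no message is lost when its
// future is dropped mid-poll. The open question in this review is
// whether swarm.select_next_some() offers the same guarantee.
async fn demo(mut rx: mpsc::Receiver<u32>) {
    let mut ticker = interval(Duration::from_secs(1));
    loop {
        tokio::select! {
            Some(n) = rx.recv() => println!("got {n}"),
            _ = ticker.tick() => println!("tick"),
        }
    }
}
```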
Otherwise it would loop infinitely, sending the volatile message when only one of the two peers is ready.
dht-cache/src/dht.rs
Outdated
Formatting:

```diff
 let peers: Vec<_> = check_peers(&mut swarm);
-if !peers.is_empty() &&
-    ev_send.send(Event::Ready(peers)).is_err() {
-        return swarm;
-}
+if !peers.is_empty() && ev_send.send(Event::Ready(peers)).is_err() {
+    return swarm;
+}
```
Force-pushed from 8849e61 to 37210e4