Java: have a future cache #43
Conversation
LGTM, as long as URI's equals() satisfies the use case and there is no problem with threads sharing the futures if they do.
It satisfies the use case, and a completed future can easily be shared and reused.
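A minimal sketch, using only the standard library, of why sharing a completed future is safe: once a CompletableFuture is complete, any number of threads can chain stages on the same instance and each simply observes the completed value. The names here are illustrative, not from the patch.

```java
import java.util.concurrent.CompletableFuture;

public class SharedCompletedFutureDemo {
    public static void main(String[] args) throws InterruptedException {
        // One already-completed future...
        CompletableFuture<String> shared = CompletableFuture.completedFuture("endpoint-1");

        // ...observed from two threads; each chained stage sees the same value,
        // with no extra synchronization needed because the state is final.
        Runnable consumer = () ->
                shared.thenApply(v -> Thread.currentThread().getName() + " saw " + v)
                      .thenAccept(System.out::println);

        Thread t1 = new Thread(consumer);
        Thread t2 = new Thread(consumer);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```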
I don't understand this patch at all :-( Neither its motivation - is creating a future slow? - nor the implementation... Is it really OK in Java to return the same future twice? In Seastar, for example, it is a faux pas. If you didn't like the idiom used in the existing code,

    CompletableFuture<Endpoint> f = new CompletableFuture<Endpoint>();
    f.complete(ret);
    return f;

how about using

    return CompletableFuture.completedFuture(ret);

This is a built-in idiom for returning a ready value that is presumably implemented in the most efficient way possible - why do we need something more efficient that has dubious correctness?
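For reference, a self-contained sketch of the two idioms being compared; Endpoint and ret are placeholders taken from the comment above, not names from the actual patch.

```java
import java.util.concurrent.CompletableFuture;

class Endpoint {} // placeholder result type from the comment above

class CompletedFutureIdioms {
    // Idiom used in the existing code: allocate a future, complete it, return it.
    static CompletableFuture<Endpoint> explicit(Endpoint ret) {
        CompletableFuture<Endpoint> f = new CompletableFuture<>();
        f.complete(ret);
        return f;
    }

    // Built-in shorthand for returning an already-ready value.
    static CompletableFuture<Endpoint> builtIn(Endpoint ret) {
        return CompletableFuture.completedFuture(ret);
    }
}
```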
It depends: creating a future allocates memory. Under normal circumstances that does not cost much, but if the code is CPU- or memory-bound it causes performance degradation, so if we can avoid it, we should.
It is OK: in Java a future is just a class instance whose final state holds an instance of the underlying type or an exception.
It still allocates and deallocates memory.
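For context, the caching idea under discussion looks roughly like the sketch below. This is an assumption-laden illustration, not the actual patch: the URI key, the Endpoint placeholder, and the class name are all made up; the point is only that repeated requests reuse one cached, already-completed future instead of allocating a new one each time.

```java
import java.net.URI;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class Endpoint {} // placeholder result type, as in the earlier snippet

class EndpointFutureCache { // hypothetical name, for illustration only
    private final ConcurrentHashMap<URI, CompletableFuture<Endpoint>> cache =
            new ConcurrentHashMap<>();

    // Returns the cached completed future for this URI, creating it on first use.
    // Correctness relies on URI.equals()/hashCode() identifying cache hits,
    // which is exactly the precondition raised in the review above.
    CompletableFuture<Endpoint> get(URI uri, Endpoint resolved) {
        return cache.computeIfAbsent(uri,
                u -> CompletableFuture.completedFuture(resolved));
    }
}
```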
You may be more up-to-date than me - as I said, it's been 15 years since my last "serious" work in Java, but I did work in Java heavily for 5 years before that, so I have some experience that seems to contradict what you're saying and I wonder what I am missing. My experience is that much Java code keeps creating objects that are returned once and then abandoned. There is no "deallocation cost": the generational GC is written with the knowledge that in Java most objects will be abandoned and will not survive their first GC, and it makes this use case very efficient (and in particular, NOT proportional to the amount of allocation done in the young generation).

Moreover, your alternative to completedFuture() isn't free either - in the best case it involves a hash-table lookup. Are you sure that the memory allocation done by completedFuture() (not much more than a pointer increment!) is more expensive than this hash lookup? Is it so much more efficient that it's worth making the code more complex?
Is this documented anywhere? It's not at all obvious to me whether this approach is supposed to work or not - in Seastar it wouldn't have worked. I found a package, https://github.com/jmnarloch/completable-future-cache, that also uses this idea of returning multiple references to the same CompletableFuture object ("all of them will gain access to the same instance of CompletableFuture that they can observe for completion"), so I guess you are correct and it's not a wrong approach. I still wonder if it's worth it. Could you do me a favor and try some sort of loop to measure whether completedFuture() is really as slow as you assume it to be?
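A naive timing loop along the lines requested - only a rough sketch; a real measurement should use a harness such as JMH to account for JIT warm-up and GC effects.

```java
import java.util.concurrent.CompletableFuture;

public class CompletedFutureTiming {
    public static void main(String[] args) {
        final int iterations = 10_000_000;
        Object sink = null; // keep a reference so the JIT cannot trivially drop the allocation

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink = CompletableFuture.completedFuture(i);
        }
        long elapsedNs = System.nanoTime() - start;

        System.out.printf("completedFuture(): ~%d ns/op (sink=%b)%n",
                elapsedNs / iterations, sink != null);
    }
}
```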
Java has no "deallocation" of memory. It uses a generational GC, where allocation is O(1) and very efficient (basically just a pointer increment), and deallocation is, believe it or not, less than O(1) - i.e., deallocation is (more or less) free; it is the memory that is not deallocated - i.e., that lives for a significant amount of time - which takes cycles. So I doubt the completedFuture() function is as inefficient as you think it is. It might even be a tiny bit faster than the hash-table lookup you replaced it with... And certainly easier to understand.
It is not about deallocation per se; it is more about the sheer number of objects the GC needs to track - the fewer objects to track, the fewer problems the GC has.
We recreate the future on every request; there is no reason to do so.