RFC: Query batching support #800
I personally believe that batching with Apollo's supported method is not a good option. Fundamentally, Apollo keeps writing additional extension specs, and this one in particular I believe goes against the spirit of GraphQL and doesn't actually add much benefit.
True batching would mean that we'd combine queries into a single GraphQL document and then split the combined result from the API back into separate responses. In theory this could even be achieved at the Graphcache level, or alternatively in a separate exchange. This approach makes it clear that batching isn't that beneficial on the front end either. If we batch the queries of a single tick into one, those will be mounting queries, which can be hoisted manually or don't have to be combined at all (see the prior arguments for server-side). If we instead batch on a timer, the queries are unrelated, and batching them together has no noticeable speed-up effect given a performant server setup; it can even decrease the responsiveness of the client-side load, e.g. when a fast auto-suggest search query is batched together with a slow page query.
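For illustration, here is a rough sketch of what such "true" batching could look like; all query, field, and function names here are made up. Two separate documents are merged into one by aliasing their root fields, and the combined result is split back apart afterwards:

```ts
// Two independent query documents (hypothetical schema):
const pageQuery = /* GraphQL */ `query Page { article(id: 1) { title } }`;
const suggestQuery = /* GraphQL */ `query Suggest { search(term: "a") { hits } }`;

// Combined by hand here; a real implementation would merge the parsed ASTs
// and alias root fields to avoid collisions:
const combined = /* GraphQL */ `
  query Combined {
    q0_article: article(id: 1) { title }
    q1_search: search(term: "a") { hits }
  }
`;

// Split the combined response back into per-query results:
function splitResult(data: Record<string, unknown>) {
  return {
    page: { article: data.q0_article },
    suggest: { search: data.q1_search },
  };
}
```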
I agree with you that Apollo batching is quirky, and I would advise everyone against using it as well, just as you do. The points you make are completely true for 99% of cases. Our case is a bit weird and on a bigger scale than usual, though.
This amount of separate requests on a single page load creates pressure on the device, the network, and the server. Device performance is a very sensitive topic for us, as we operate in emerging markets. I'm not so worried about the network because of HTTP/2, but the biggest cost is the server. Every query comes with fixed costs: performance (e.g. auth, geolocation) but also money, as every request through the CDN costs something, and multiplied by tens of millions of MAUs that is a significant amount. Also, when we use multiple queries they land on multiple server instances, so the server can't batch the calls to downstream micro-services & databases. We could solve that by manually hoisting the queries. We can and we will solve it no matter how this RFC ends, but I just wanted to open a discussion on this topic and maybe get some ideas that are better than mine.
That’s interesting! I agree that this amount of requests would most definitely be unacceptable. Either way, I’m sure we can solve this, but ultimately we may run into interactions between all modes of fetching. Currently we have five (?) different modes of requests. So I can see how in some cases there are definitely going to be arguments for some kind of batching, but there are too many variables right now, since we have so many different modes of fetching, so it’s looking a little fuzzy 🙃
The reason why this N+1 appears on the client is that the initial query is public and cacheable, but the N follow-up queries are not. So if we want a fast initial load, we have to load them separately.
I've played with that and managed to get a proof of concept working. In theory it should be possible to implement it without DataLoader, just using wonka functions, but I'm not very familiar with them and didn't manage to get that working. I noticed one bug though, and I've created a separate issue for it. If you pass …
Hiya 👋 Sorry to be coming back to you on this so late. I know that in your case it makes sense, and the only alternative would be for you to use … Instead, we may want to write a recipe and basically merge this issue into #815. How does that sound?
That is fair, and I fully understand your reasons. The recipe will be great for anyone who needs to solve this.
@kitten Can you elaborate on what this means? "It's common practice to hoist queries per page". Does this mean something like having a higher-order component (at the page level) which uses the …? I couldn't find much about "hoisting queries". Any help would be appreciated!
@jesselee34 there's not really any magic to it, but a page doesn't really need separate queries per section. Instead, it often makes most sense to define fragments per structural component. Each component can define its own fragment, and those can be composed together upwards until your page only has one or a few queries overall to run. Relay champions this approach a lot and makes it its standard pattern. We're currently working on some tooling as well to encourage this on top of GraphQL Code Generator.

Overall, given some level of caching, I wouldn't worry about batching. Sending multiple API requests is often even better than batching them into one, especially since it helps CDN-level caching. And if you run into problems where multiple queries depend on each other, then you may have a schema that doesn't cater to your app perfectly, and those queries cannot be batched either way.

Hope that clarifies it ✌️
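As a small illustration of the fragment-composition pattern described above (the schema, field names, and fragment names are hypothetical; only the `gql` tag from `@urql/core` is real), each component declares the data it needs and the page hoists everything into one query:

```ts
import { gql } from '@urql/core';

// Each structural component defines its own data needs as a fragment…
const AvatarFragment = gql`
  fragment Avatar_user on User {
    avatarUrl
  }
`;

const BioFragment = gql`
  fragment Bio_user on User {
    name
    bio
  }
`;

// …and the page composes those fragments into a single hoisted query,
// so only one request is sent for the whole page.
const ProfilePageQuery = gql`
  query ProfilePage($id: ID!) {
    user(id: $id) {
      ...Avatar_user
      ...Bio_user
    }
  }
  ${AvatarFragment}
  ${BioFragment}
`;
```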
In the cases where you do desire batching, like mine (where we do not yet support HTTP/2, don't have enough control to hoist all the queries, don't want GraphQL HTTP caching, and have a backend and database that can parallelize queries more efficiently within a request than across multiple requests), I found I needed something a bit different from @jakubriedl's POC and submitted my own. This version works with non-persisted queries.
We don't want to explicitly support query batching (however, a third party can implement this). We will implicitly support this in the UI with fragment boundaries: #1408
Summary
Apollo Client supports batching queries, which is useful when many small components fetch at once. It reduces network overhead and also allows the server to avoid the N+1 issue if implemented correctly.
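For context, Apollo's batching works at the transport level: operations collected within a short window are POSTed as a single JSON array, and the server replies with an array of results in the same order. A rough sketch of that wire format, where the URL and queries are placeholders:

```ts
// Several operations go out as one JSON array in a single POST.
const batchedBody = JSON.stringify([
  { query: 'query A { viewer { id } }', variables: {} },
  { query: 'query B { feed { id } }', variables: {} },
]);

fetch('https://example.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: batchedBody,
}).then(res => res.json()); // => an array of results, one per operation
```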
Proposed Solution
It could be a new exchange that batches queries together, possibly using DataLoader.
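A minimal sketch of the DataLoader mechanism such an exchange could build on, not a full urql exchange; the endpoint, request shape, and the assumption that the server returns results in input order are all hypothetical:

```ts
import DataLoader from 'dataloader';

interface GraphQLRequest {
  query: string;
  variables?: Record<string, unknown>;
}

// DataLoader collects every load() call made in the same tick and hands
// them to the batch function as one array, yielding one HTTP round trip
// for all queries queued in that tick.
const batchLoader = new DataLoader<GraphQLRequest, unknown>(
  async requests => {
    const res = await fetch('https://example.com/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(requests),
    });
    // Assumed contract: the server answers with results in input order.
    return res.json();
  },
  { cache: false } // don't memoize; queries aren't stable cache keys here
);

// Usage: these two calls resolve from a single batched request.
batchLoader.load({ query: '{ viewer { id } }' });
batchLoader.load({ query: '{ feed { id } }' });
```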
Requirements
The behaviour should be similar to Apollo's: allow enabling batching at the query level, and work correctly with persisted queries (avoid using persisted queries when batching).