
Calling cache.gc() might cause memory usage increase and performance impact since v3.9 #12360

tgohn opened this issue Feb 10, 2025 · 4 comments


@tgohn

tgohn commented Feb 10, 2025

Issue Description

Since @apollo/client v3.9, calling client.cache.gc() causes memory usage to increase in various caches afterwards.
Some affected caches are:

  • inMemoryCache.executeSelectionSet
  • inMemoryCache.executeSubSelectedArray
  • client.cache.addTypenameTransform.resultCache

This behaviour did NOT occur in v3.8.

Also, I think there is a performance impact as well when re-reading a previously read GraphQL query document after calling cache.gc().
This happens because the subsequent call to cache.storeReader.executeSelectionSet (optimism.wrap) results in a cache miss.
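
For illustration, here is a hedged sketch of one way the growth can be observed from application code. It assumes Apollo Client >= 3.9, where the development-only client.getMemoryInternals() helper reports the sizes of these internal memoization caches; the endpoint is a placeholder.

```ts
import { ApolloClient, InMemoryCache } from "@apollo/client";

const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache: new InMemoryCache(),
});

function logCacheSizes(label: string) {
  // getMemoryInternals() (dev builds, >= 3.9) reports the sizes of the internal
  // memoization caches, including inMemoryCache.executeSelectionSet.
  console.log(label, JSON.stringify(client.getMemoryInternals(), null, 2));
}

// After some queries have run:
logCacheSizes("before gc");
client.cache.gc();
// Re-reading the same queries now repopulates fresh entries, so the reported
// sizes keep climbing instead of staying flat.
logCacheSizes("after gc + re-read");
```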

Some other notes:

When doing a git bisect on the apollo-client repo, the first commit introducing this behaviour is bd26676.

From reading the code, I think that resetting the cache in inMemoryCache.addTypenameTransform (source) produces a different cache key for the same GraphQL query on subsequent reads.

The chain is roughly: cache.gc() resets the addTypenameTransform result cache, so the transform returns a new document object for the same query; executeSelectionSet is keyed on that document, so it misses and creates fresh entries while the old ones stick around.

Hope this makes sense.

Link to Reproduction

https://codesandbox.io/p/devbox/apollo-3-9-memory-leak-public-xknp7y?file=%2Fsrc%2FApp.tsx&workspaceId=ws_UB3koDt7PAYGsj7fRJouXc

Reproduction Steps

  • Click on "Toggle Show Location & cache.GC()" multiple times.
    • This button mounts/unmounts a simple React component that uses the useQuery() hook (code copied from the get-started doc; a minimal sketch follows these steps).
    • The button also triggers apolloClient.cache.gc() at the same time.
  • Observe that the various cache sizes keep increasing, for example inMemoryCache.executeSelectionSet.size.
    • This behaviour started in v3.9.0.
    • In v3.8, inMemoryCache.executeSelectionSet.size stays at 6.
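
For reference, a minimal sketch of the toggled setup, mirroring the linked sandbox; the GET_LOCATIONS query and component names are illustrative (styled after the get-started tutorial), and an ApolloProvider is assumed higher up in the tree.

```tsx
import { useState } from "react";
import { gql, useQuery, useApolloClient } from "@apollo/client";

const GET_LOCATIONS = gql`
  query GetLocations {
    locations {
      id
      name
    }
  }
`;

function Locations() {
  const { loading, data } = useQuery(GET_LOCATIONS);
  if (loading) return <p>Loading…</p>;
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}

export default function App() {
  const client = useApolloClient();
  const [show, setShow] = useState(false);

  return (
    <>
      <button
        onClick={() => {
          setShow((s) => !s); // mounts/unmounts the useQuery component
          client.cache.gc(); // garbage-collect at the same time
        }}
      >
        Toggle Show Location and cache.gc()
      </button>
      {show && <Locations />}
    </>
  );
}
```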

Reproduction screenshot: (image attached in the original issue)

@apollo/client version

3.12.11

@phryneas
Member

Hmm.
It's expected that this ends up with a new identifier, but it's not expected that any references to the old one stay in use for long unless they're actively used. In your reproduction they are not actively used, so I have to admit that I'm a bit at a loss.

I'll have to dig deeper into this, but also appreciate any further insights you might come up with.

@tgohn
Author

tgohn commented Feb 10, 2025

I would need to dig deeper into the code as well to understand where the inMemoryCache.executeSelectionSet (optimism.wrap) cache gets pruned.

But this could be related to a timing issue where:

  • cache.gc() was called.
  • The useQuery hook's subscription.current.unsubscribe() was deferred by setTimeout (source).
  • Hence, when subscription.current.unsubscribe() eventually ran, it resulted in a cache miss when trying to delete the old entry in optimism's strong cache (screenshot below; a toy illustration of the timing follows it).

Screenshot: (image attached in the original issue)
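
To make the suspected sequence concrete, here is a self-contained toy illustration (plain TypeScript, not Apollo internals): a strong cache keyed by document identity, a reset that swaps the document object, and a deferred cleanup that then misses.

```ts
type Doc = { readonly query: string };

const resultCache = new Map<Doc, unknown>();

let transformedDoc: Doc = { query: "GetLocations" }; // document identity before gc()
resultCache.set(transformedDoc, "old result"); // entry created by the mounted useQuery

// cache.gc(): the transform cache is reset, so the *same* query text now maps
// to a brand-new document object.
transformedDoc = { query: "GetLocations" };

setTimeout(() => {
  // Deferred unsubscribe: it recomputes the key via the new document identity,
  // so the delete misses and the old entry stays in the strong cache.
  resultCache.delete(transformedDoc);
  console.log("entries still held:", resultCache.size); // 1 – the old result lingers
  // Re-reading the query afterwards would add a second entry under the new
  // identity, which matches the growth seen in the reproduction.
}, 0);
```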

@phryneas
Member

phryneas commented Feb 10, 2025

Okay, I've dug into this - this is actually not an unbounded memory leak, but expected behaviour.

First thing, if this is disturbing you, you can reset everything by calling

client.cache.gc({ resetResultCache: true });

which will also reset/recreate the executeSelectionSet cache.

But that said, this is kinda expected behaviour given what the executeSelectionSet cache is: it's a rotating cache with a maximum size.
Having this cache "grow" is something we don't consider a "memory leak" unless it grows beyond its maximum size.

You can think of this cache like memory management in an operating system such as Linux: when you look at memory usage it seems to grow constantly, but a lot of it is labelled as "cache" rather than "memory usage". Once usage reaches a certain threshold, that memory is collected; before that it's held in case it might be useful in the future.

We don't actively remove data from it before it grows full.
We might change it to a "weak cache" implementation in the future, but that would be a breaking change for environments where weak caches don't exist (which would be fine). I believe I deliberately decided against it back when I was faced with that choice, though; I'll have to try hard to remember my reasoning from back then 😆
Either way, to track this and experiment with it, I've opened #12361 - which might or might not help here.

If you want to limit how big this cache can grow, I recommend you set a smaller cache size.
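
For example, here is a hedged sketch of two ways to cap these caches; the option names are based on the memory-management knobs added around Apollo Client 3.9 and should be checked against the current docs.

```ts
import { InMemoryCache } from "@apollo/client";
import { cacheSizes } from "@apollo/client/utilities";

// Option A (Apollo Client >= 3.9): tune individual memoization caches globally.
// These must be set before the cache/client is created.
cacheSizes["inMemoryCache.executeSelectionSet"] = 1024;
cacheSizes["inMemoryCache.executeSubSelectedArray"] = 1024;

// Option B: cap the result cache via the InMemoryCache constructor option
// (the same option the reporter had set to Infinity).
const cache = new InMemoryCache({
  resultCacheMaxSize: 1024, // illustrative value
});
```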

PS: I'm kinda curious - in what environment are you using this that memory size is of a concern to this level? Note that on servers you shouldn't have ApolloClient instances that exist for longer than one user request.

@tgohn
Author

tgohn commented Feb 10, 2025

Thank you for the quick turnaround.
And I agree with the distinction between an unbounded memory leak vs. a maximum cache size.

PS: I'm kinda curious - in what environment are you using this that memory size is of a concern to this level

For context:
We have a Single Page App that is:

  • using apollo-client v3.8
  • using cache.evict() followed by cache.gc() to manually remove unneeded data from the Apollo cache (sketched below)
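
A minimal sketch of that evict-then-gc pattern; the Location typename and id are illustrative, not taken from our actual app.

```ts
import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache();

// Remove a no-longer-needed entity from the normalized cache...
const id = cache.identify({ __typename: "Location", id: "loc-1" });
if (id) {
  cache.evict({ id });
}

// ...then collect anything that became unreachable as a result.
cache.gc();
```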

When we tried to upgrade the client to v3.9, we saw an increase in the browser's usedJSHeapSize, and other UI performance metrics worsened.
Hence I was looking into what changed between v3.8 and v3.9 in terms of memory.

Also, historically, we have been configuring apollo-client with resultCacheMaxSize: Infinity due to legacy reasons 😬:
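
```ts
// Roughly what that legacy configuration looks like (illustrative sketch; the
// actual client setup is not shown here).
import { ApolloClient, InMemoryCache } from "@apollo/client";

const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache: new InMemoryCache({
    resultCacheMaxSize: Infinity, // effectively unbounded result memoization
  }),
});
```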

I will get back to the team with your suggestions and see how best to proceed forward in our case.
