
Apollo Client + NextJS, getServerSideProps memory leak, InMemoryCache #9699

Closed
aaronkchsu opened this issue May 9, 2022 · 18 comments

Comments

@aaronkchsu

The official Apollo and Next.js recommendation is to create a new ApolloClient instance each time a GraphQL request needs to be executed when SSR is used.

This gives good results for memory usage: memory grows by some amount and is then reset by the garbage collector back to the initial level.

The problem is that this initial memory level itself keeps growing, and the debugger shows the leak is caused by the "InMemoryCache" object attached to the ApolloClient instance as cache storage.

We tried reusing the same "InMemoryCache" instance for all new Apollo instances, and tried disabling caching by customizing the policies in "defaultOptions", but the leak is still present.

Is it possible to turn off the cache completely, something like passing a "false" value for the "cache" option when initializing ApolloClient? Or is this a known problem with a known solution that could be addressed by customizing the "InMemoryCache"?

We tried numerous options, such as forcing cache garbage collection, evicting objects from the cache, etc., but nothing helped; the leak is still there.

Thank you!

export function createApolloClient(apolloCache) {
    return new ApolloClient({
        cache: apolloCache,
        connectToDevTools: !!isBrowser,
        link: ApolloLink.from([
            errorLink,
            authLink,
            createUploadLink({ credentials: "include", uri }),
        ]),
        ssrMode: !isBrowser,
        typeDefs,
        resolvers,
        defaultOptions: {
            watchQuery: {
                fetchPolicy: "cache-first",
                errorPolicy: "all",
            },
            query: {
                fetchPolicy: "cache-first",
                errorPolicy: "all",
            },
        },
    });
}

export function initializeApollo(initialState = {}) {
    const _apolloCache = apolloCache || createApolloCache();

    if (!apolloCache) apolloCache = _apolloCache;

    // console.log("APOLLO_CACHE", apolloCache);
    // apolloCache.evict();
    apolloCache.gc();

    const _apolloClient = apolloClient || createApolloClient(_apolloCache);

    // For SSG and SSR always create a new Apollo Client
    if (typeof window === "undefined") return _apolloClient;
    // Create the Apollo Client once in the client
    if (!apolloClient) apolloClient = _apolloClient;

    return _apolloClient;
}

export function useApollo(initialState) {
    const store = useMemo(() => initializeApollo(initialState), [initialState]);
    return store;
}

On the page:

export async function getServerSideProps(context) {
    const apolloClient = initializeApollo();
    const biteId =
        (context.query && context.query.id) ||
        (context.params && context.params.id);

    const realId = delQuery(biteId);

    try {
        await apolloClient.query({
            query: FETCH_BLERP,
            variables: { _id: realId },
            ssr: true,
        });
    } catch (err) {
        console.log("Failed to fetch blerp");
    }

    return {
        props: {
            _id: realId,
            initialApolloState: apolloClient.cache.extract(),
        },
    };
}
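
For context on the "turn the cache off" question: the ApolloClient constructor requires a cache instance, so there is no cache: false option. The closest approximation is to default every server-side operation to the no-cache fetch policy so nothing is written into InMemoryCache. A minimal sketch, assuming a per-request helper (createServerSideClient is a hypothetical name, not the createApolloClient above):

import { ApolloClient, HttpLink, InMemoryCache } from "@apollo/client";

// Hypothetical per-request client that never writes query results into the
// normalized cache. A cache instance is still required, but with "no-cache"
// defaults it stays empty and is garbage-collected together with the client.
export function createServerSideClient(uri) {
    return new ApolloClient({
        ssrMode: true,
        link: new HttpLink({ uri, credentials: "include" }),
        cache: new InMemoryCache(), // fresh instance per request
        defaultOptions: {
            query: { fetchPolicy: "no-cache", errorPolicy: "all" },
            watchQuery: { fetchPolicy: "no-cache", errorPolicy: "all" },
        },
    });
}

Note that with no-cache defaults, apolloClient.cache.extract() in getServerSideProps returns an empty state, so the query result would have to be passed to the page via props directly.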
@Falco-Boehnke

We have a similar issue with a basically identical setup. The response object keeps growing and growing; despite there being cached values, responses are always returned from the server.

@jeffminsungkim

Having the same issue here.

@mcousillas6

Same issue here, any updates from the team?

@bignimbus
Contributor

Hi all, we are taking a fresh look at server-side rendering for release 3.8. See #10231 for more details and please feel free to provide feedback 🙏🏻 As we get into the implementation details we'll have a better sense of how to tackle memory management in this new paradigm. Thanks for your patience!

@aaronkchsu
Author

aaronkchsu commented Dec 4, 2022

Awesome thanks for the update and for all the movement happening on the project!!

@samueldusek

Hi, 🙂

Did anybody successfully solve the issue?

We are facing the same issue and we really need to solve this. 🙁

@y-a-v-a

y-a-v-a commented Apr 6, 2023

Though I agree that memory consumption is very high, I wouldn't call this a memory "leak"; it is just that, imho, unexpected behaviour is happening...

Judging from the documentation, one would expect to be able to limit the cache used by ApolloClient with the resultCacheMaxSize property. This appears to be a setting used by the optimism dependency, and when monitoring that cache, it does indeed limit itself to the configured number of entries. But that doesn't seem to be the whole story.

Correct me if I'm wrong here, but ApolloClient uses a cache through the EntityStore, which relies on the Trie class of @wry/trie. And that's the one making our Node.js process run out of memory, as it doesn't seem to be limited anywhere and doesn't seem to be garbage-collected automatically. The EntityStore cache appears to store a full layout of the client's received responses as objects in Trie instances.

In our case we run an e-commerce website in NextJS with a lot of products for different locales, and all this data is stacked in the EntityStore's Root. This is very convenient, but it consumes so much memory that the process dies after a couple of hours: when requesting a couple of thousand Product Detail Pages that were not pre-built by NextJS (or are invalidated at almost the same time), ApolloClient (running within the NextJS server) fetches a lot of data and stores it in the EntityStore.

I decided to go for a combination of configuration changes that seem to be related:

import { ApolloClient, createHttpLink, InMemoryCache } from '@apollo/client';

const link = createHttpLink({
  uri: process.env.GRAPHQL_ENDPOINT || 'http://localhost:5000/',
  credentials: 'include',
});

const cache = new InMemoryCache({
  resultCacheMaxSize: 10_000,
  typePolicies: {
    CommonData: {
      keyFields: false,
    },
    Cart: {
      keyFields: false,
    },
    Wishlist: {
      keyFields: false,
    },
  },
});

const apolloClient = new ApolloClient({
  ssrMode: typeof window === 'undefined',
  link,
  name: 'storefront',
  version: '1.0',
  cache,
  defaultOptions: {
    mutate: {
      fetchPolicy: 'no-cache',
    },
    query: {
      fetchPolicy: typeof window === 'undefined' ? 'no-cache' : 'cache-first',
      errorPolicy: 'all',
    },
    watchQuery: {
      fetchPolicy: typeof window === 'undefined' ? 'no-cache' : 'cache-first',
      errorPolicy: 'all',
    },
  },
});

And an additional setInterval to explicitly call gc on the cache:

setInterval(() => {
  cache.gc({ resetResultCache: true, resetResultIdentities: true });
}, 1_000 * 60 * 15);

This seems to mitigate the chance of running out of memory due to huge numbers of Trie instances.

FYI:

"@apollo/client": "3.7.11",
"next": "12.3",

@joekur

joekur commented Oct 19, 2023

I too have been investigating a memory leak in an SSR context, and found retained objects that stick around even after manual garbage collection (via Chrome DevTools). These are strings and objects related to the variables of a GraphQL query (client.query()). I followed these up the retaining tree to canonicalStringify, and noticed retainers of both WeakMap entries and Trie.

What stuck out to me was that the Trie was using strong Maps underneath. I inspected one of these objects, and saw that while weakness was set to true on the Trie, it had a strong field (of type Map).

See the attached heap-snapshot screenshots (taken 2023-10-19; not reproduced here).

I haven't fully wrapped my head around how canonicalStringify is supposed to work (or @wry/trie), but this looks unexpected to me, and my assumption is that these strong Map instances are the reason they are not being GCed from the WeakMap and Trie references.

As mentioned in #9699 (comment), calling cache.gc() (on any InMemoryCache instance) clears this memory.
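
For reference, a minimal sketch of that manual workaround, assuming client is an ApolloClient instance backed by an InMemoryCache; gc() returns the IDs of the removed objects, which makes the effect easy to verify:

// Drop unreachable normalized objects and reset the internal result caches
// (same flags as in the setInterval snippet earlier in the thread).
const evicted = client.cache.gc({
    resetResultCache: true,
    resetResultIdentities: true,
});
console.log(`cache.gc() evicted ${evicted.length} objects`);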

@jerelmiller
Member

@joekur you might be interested in #11254, which did some work to address the memory overhead that canonicalStringify currently incurs. That change was released in 3.9.0-alpha.2. Feel free to give that a shot and see if you get better results!

@joekur

joekur commented Oct 19, 2023

@jerelmiller was just giving that a look. Gave it a local test, and it does indeed look to fix the issue. Nice work! 👏

@joekur

joekur commented Nov 1, 2023

@jerelmiller when can we expect a 3.9.0 release?

@phryneas
Member

phryneas commented Nov 2, 2023

At this point we are targeting a release candidate late November/early December, and a final release about one or two weeks after that unless we get feedback that delays us.

@phryneas
Member

Hey everyone!
We released the beta containing our memory story this week - you can read all about it in the announcement blog post.

We would be very grateful if you could try it out and report your cache measurements to us so we can dial in the right default cache limits (more details in the blog post) :)
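
For example, a minimal sketch of collecting such a measurement, assuming the development-only getMemoryInternals() helper exposed by the 3.9 prereleases (see the announcement post for the exact API; the call is guarded in case the build does not provide it):

// Dump Apollo Client's internal memoization/cache sizes so the numbers can be
// reported back; only available in development builds of @apollo/client >= 3.9.
const report = client.getMemoryInternals?.();
console.log(JSON.stringify(report, null, 2));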

@joekur

joekur commented Jan 18, 2024

@phryneas any updated ETA on the 3.9.0 stable release?

@phryneas
Member

@joekur we just shipped RC.1, and if nothing else comes up I would guess about a week.

@phryneas
Member

phryneas commented Feb 6, 2024

We have recently released Apollo 3.9 which I believe should have fixed this issue - so I'm going to close this.

If this issue keeps persisting in Apollo Client >= 3.9.0, please open a new issue!

@phryneas phryneas closed this as completed Feb 6, 2024


github-actions bot commented Mar 9, 2024

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
For general questions, we recommend using StackOverflow or our discord server.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 9, 2024