
Reducing memory usage #112

Open
ddundo opened this issue Apr 22, 2024 · 2 comments
Labels
optimisation An opportunity to optimise performance

ddundo commented Apr 22, 2024

@stephankramer @jwallwork23 could you please take a look at the simple example below? I put it together to show the process's memory usage before and after adapting the mesh. The usage really piles up when we have multiple meshes.

Edit: I cleaned up the example. Based on this, I think there really is a memory leak. I am investigating, but I've never used memory profilers before, so it might take a bit. I will bring it up at the Friday meeting.

import gc, os, psutil
from firedrake import *
from animate.metric import RiemannianMetric

process = psutil.Process(os.getpid())
def mem(i): print(f"Memory usage {i}: {process.memory_full_info().uss / 1024**2:.0f} MB")

def get_metric():
    mesh = UnitSquareMesh(1000, 100)
    x, y = SpatialCoordinate(mesh)
    f = Function(FunctionSpace(mesh, "CG", 1)).interpolate(cos(x*y))

    # Build and normalise a Hessian-based metric; mesh, f, etc. go out
    # of scope on return, so only the metric should stay alive.
    P1_ten = TensorFunctionSpace(mesh, "CG", 1)
    metric = RiemannianMetric(P1_ten)
    metric.set_parameters({"dm_plex_metric_target_complexity": 10000})
    metric.compute_hessian(f)
    metric.normalise()

    return metric

mem(0)  # baseline
metric = get_metric()
mem(1)  # after building the metric

gc.collect()
mem(2)  # after garbage collection

del metric
gc.collect()
mem(3)  # after deleting the metric too

This prints:

Memory usage 0: 117 MB
Memory usage 1: 790 MB
Memory usage 2: 508 MB
Memory usage 3: 508 MB
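
Since I haven't used memory profilers before, here is a minimal sketch of one option: Python's built-in tracemalloc, used to list which Python-level allocation sites survive deleting the metric. (Caveat: tracemalloc only sees Python allocations, so anything PETSc allocates on the C side won't show up.)

import gc
import tracemalloc

tracemalloc.start(10)  # record up to 10 stack frames per allocation

metric = get_metric()  # from the example above
del metric
gc.collect()

# Show the five largest surviving Python-level allocation sites.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("traceback")[:5]:
    print(stat)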
ddundo added the optimisation label on Apr 22, 2024

ddundo commented Apr 25, 2024

My workaround so far: run the adaptation in a subprocess and checkpoint adapted_mesh, then load it in the original script. This works well, but it's a pain. Still, it might help us investigate what we can get rid of and get this sorted properly. (A sketch of the subprocess side follows the snippet below.)

Checkpointing the adapted_mesh above and then loading it in a separate script:

import os
import psutil
from firedrake import *

process = psutil.Process(os.getpid())

ram_usage = lambda: print(f"RAM usage: {process.memory_info().rss / 1024**2:.0f} MB")

ram_usage()  # 201 MB
with CheckpointFile(..., "r") as afile:
    adapted_mesh = afile.load_mesh("adapted_mesh")
ram_usage()  # 227 MB
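
For reference, the subprocess side of the workaround might look like the sketch below. The adapt import and its name keyword are my reading of animate's API, and "adapted_mesh.h5" is a placeholder path; the point is that exiting the subprocess is what actually returns the adaptation memory to the OS.

# adapt_and_checkpoint.py -- run via e.g.
#   subprocess.run([sys.executable, "adapt_and_checkpoint.py"], check=True)
# so the adaptation memory is reclaimed when the process exits.
# Sketch only: `adapt` and its `name` kwarg are assumed from animate's API,
# and "adapted_mesh.h5" is a placeholder path.
from firedrake import *
from animate.adapt import adapt
from animate.metric import RiemannianMetric

mesh = UnitSquareMesh(1000, 100)
x, y = SpatialCoordinate(mesh)
f = Function(FunctionSpace(mesh, "CG", 1)).interpolate(cos(x*y))

metric = RiemannianMetric(TensorFunctionSpace(mesh, "CG", 1))
metric.set_parameters({"dm_plex_metric_target_complexity": 10000})
metric.compute_hessian(f)
metric.normalise()

adapted_mesh = adapt(mesh, metric, name="adapted_mesh")

with CheckpointFile("adapted_mesh.h5", "w") as afile:
    afile.save_mesh(adapted_mesh)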


ddundo commented Apr 28, 2024

And an even smaller example, with just firedrake:

import gc, os, psutil
from firedrake import *

process = psutil.Process(os.getpid())
def mem(i): print(f"Memory usage {i}: {process.memory_full_info().uss / 1024**2:.0f} MB")

mem(0)  # baseline

mesh = UnitSquareMesh(1000, 100)
x, y = SpatialCoordinate(mesh)
f = Function(FunctionSpace(mesh, "CG", 1)).interpolate(x*y)
mem(1)  # after building the mesh and function

gc.collect()
mem(2)  # after garbage collection

del f, x, y, mesh
gc.collect()
mem(3)  # after deleting everything

Which prints:

Memory usage 0: 117 MB
Memory usage 1: 210 MB
Memory usage 2: 210 MB
Memory usage 3: 210 MB

And with a finer mesh, UnitSquareMesh(1000, 1000), I get:

Memory usage 0: 117 MB
Memory usage 1: 1024 MB
Memory usage 2: 1024 MB
Memory usage 3: 181 MB

which is completely unintuitive to me.
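
One possible explanation (an assumption I haven't verified, and not firedrake-specific): glibc's malloc serves large requests via mmap and hands those pages back to the OS on free, while smaller allocations live in heap arenas that typically are not returned. That would mean only the finer mesh's large buffers show up as freed RSS. A firedrake-free numpy sketch of that behaviour:

import gc, os, psutil
import numpy as np

process = psutil.Process(os.getpid())
def mem(tag): print(f"{tag}: {process.memory_info().rss / 1024**2:.0f} MB")

mem("baseline")
small = [np.empty(1024) for _ in range(100_000)]  # ~0.8 GB as 8 KB blocks (heap arenas)
mem("small allocated")
del small
gc.collect()
mem("small freed")  # RSS often stays high: arena pages are kept by malloc

big = np.empty(100_000_000)  # ~0.8 GB as one block (mmap)
mem("big allocated")
del big
gc.collect()
mem("big freed")  # RSS typically drops: mmap'd pages go back to the OS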
