
Speed up GME by Fixing a Bug #71

Open
JeremieMelo opened this issue Aug 1, 2024 · 2 comments

Comments

@JeremieMelo

In gme.py, in the _separate_hamiltonian_sparse function, the lines

max_out_1 = bd.max(np.abs(out_diag_1))
max_out_2 = bd.max(np.abs(out_diag_2))

are very slow because the autograd abs and max are applied to autograd.numpy.numpy_boxes.ArrayBox objects. However, these maxima do not need gradients, so they can be computed on the underlying numpy arrays instead:

if isinstance(out_diag_1, npa.numpy_boxes.ArrayBox):
    out_diag_1 = out_diag_1._value
if isinstance(out_diag_2, npa.numpy_boxes.ArrayBox):
    out_diag_2 = out_diag_2._value
max_out_1 = np.max(np.abs(out_diag_1))
max_out_2 = np.max(np.abs(out_diag_2))

This is much faster, with no impact on functionality.
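The unboxing step above can be packaged as a small helper. The sketch below is illustrative, not code from the legume repository; `unbox` is a hypothetical name, and the duck-typed lookup of the `_value` attribute mirrors how autograd's ArrayBox stores its underlying array, so the helper works whether or not a value is boxed:

```python
import numpy as np

def unbox(x):
    """Return the raw numpy array behind an autograd ArrayBox.

    autograd wraps traced arrays in ArrayBox objects that keep the
    underlying data in a `_value` attribute. Duck-typing on that
    attribute handles both boxed and plain inputs without importing
    autograd just for an isinstance check.
    """
    return getattr(x, "_value", x)

# The two bottleneck maxima, computed on unboxed arrays so that
# plain numpy (not autograd's traced ops) does the work. These
# sample arrays stand in for the real out_diag_1 / out_diag_2.
out_diag_1 = np.array([1.0, -3.0, 2.0])
out_diag_2 = np.array([0.5, -0.25])
max_out_1 = np.max(np.abs(unbox(out_diag_1)))
max_out_2 = np.max(np.abs(unbox(out_diag_2)))
print(max_out_1, max_out_2)  # 3.0 0.5
```

This is safe precisely because the maxima are only used as scalars downstream, so no gradient needs to flow through them.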

@momchil-flex
Collaborator

Thanks! This is interesting; I am surprised there is an overhead here, and it may be worth investigating a bit further what the underlying reason is. For the time being, though, do you have a simulation in which you observe this inefficiency?

@JeremieMelo
Author

JeremieMelo commented Aug 1, 2024

I just ran the standard PhC waveguide simulation and inverse design. The runtime report produced when verbose=True shows that this is the runtime bottleneck, so I profiled every line in this function. Those two lines turned out to be unexpectedly slow: the autograd max function has to track which element the gradient should propagate through, which is likely the slowest part.
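To see why a traced max does more work than a plain forward np.max: in reverse-mode autodiff, the gradient of max routes the upstream gradient entirely to the position of the maximum (split evenly on ties), which autograd implements with an equality mask over the whole array. The sketch below is a minimal numpy illustration of that vector-Jacobian product, not autograd's actual implementation:

```python
import numpy as np

def max_vjp(x, upstream_grad):
    """Gradient of y = np.max(x) with respect to x.

    The max depends only on the largest element, so the upstream
    gradient flows entirely to that position (shared evenly among
    ties). Building this selection mask is extra work that a plain
    forward np.max never has to do.
    """
    mask = (x == np.max(x))
    return upstream_grad * mask / mask.sum()

x = np.array([1.0, 5.0, 3.0])
print(max_vjp(x, 1.0))  # [0. 1. 0.]
```

On top of this backward-pass work, every boxed operation also records a node on autograd's tape during the forward pass, so applying abs and max to an ArrayBox pays both costs even when, as here, the result never needs a gradient at all.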
