
Enzyme AD - almost works! #176

Open

GiggleLiu wants to merge 1 commit into master

Conversation

GiggleLiu (Collaborator)

No description provided.

        code::Const,
        xs::Duplicated, ys::Duplicated, sx::Const, sy::Const, size_dict::Const)
    xval = EnzymeRules.overwritten(config)[3] ? tape : xs.val
    for i=1:length(xs.val)
Reviewer:
this should probably also be xval

GiggleLiu (Collaborator, Author):

They should be the same, no?

Reviewer:

If it's a tuple, yes, because it's immutable. If it's an array, someone might have pushed to or popped from it between the forward and reverse passes.
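
For context, a minimal sketch of the caching idiom under discussion, using a hypothetical scalar function sumsq rather than the PR's actual einsum rule (the indexing of overwritten(config), with the function at position 1, follows Enzyme's custom-rules tutorial): the augmented primal copies the input onto the tape when Enzyme reports it may be overwritten, and the reverse rule then reads the cached copy instead of the live array.

using Enzyme
import Enzyme.EnzymeRules

sumsq(x) = sum(abs2, x)

function EnzymeRules.augmented_primal(config, ::Const{typeof(sumsq)},
        ::Type{<:Active}, x::Duplicated)
    # cache x.val only when Enzyme reports it may be overwritten
    # (index 1 of overwritten(config) is the function itself, index 2 is x)
    tape = EnzymeRules.overwritten(config)[2] ? copy(x.val) : nothing
    primal = EnzymeRules.needs_primal(config) ? sumsq(x.val) : nothing
    return EnzymeRules.AugmentedReturn(primal, nothing, tape)
end

function EnzymeRules.reverse(config, ::Const{typeof(sumsq)},
        dret::Active, tape, x::Duplicated)
    # the point raised above: read the cached copy, not the possibly mutated x.val
    xval = EnzymeRules.overwritten(config)[2] ? tape : x.val
    x.dval .+= 2 .* xval .* dret.val    # d(sum(abs2, x))/dx = 2x
    return (nothing,)
end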

        xs::Duplicated, ys::Duplicated, sx::Const, sy::Const, size_dict::Const)
    xval = EnzymeRules.overwritten(config)[3] ? tape : xs.val
    for i=1:length(xs.val)
        xs.dval[i] .+= OMEinsum.einsum_grad(OMEinsum.getixs(code.val),
Reviewer:

Is there a reason for doing the for loop here? Could this just be broadcast over all of them? Or, even more ideally, could dval be an extra argument to einsum/einsum_grad?

GiggleLiu (Collaborator, Author):

Here xs is a tuple of arrays, and einsum_grad computes the gradient of the i-th input tensor. You are right that an in-place version would give better performance; I will consider making that change in the future.
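
As a rough illustration of the in-place design being suggested (einsum_grad! is hypothetical and not an existing OMEinsum function, and the argument order of einsum_grad is assumed from its current out-of-place form), the reverse rule could accumulate directly into each shadow array:

# Hypothetical sketch: accumulate the gradient of the i-th input into dx in place.
# For brevity this still calls the existing out-of-place kernel and adds; a real
# implementation would fuse the contraction into dx to avoid the temporary.
function einsum_grad!(dx::AbstractArray, ixs, xs, iy, size_dict, dy, i)
    dx .+= OMEinsum.einsum_grad(ixs, xs, iy, size_dict, dy, i)
    return dx
end

The loop body in the reverse rule would then write into xs.dval[i] directly, once the underlying contraction is also made in-place.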


2 participants