Reduce overhead of function call and deprecate rarely used utilities #1024
Conversation
Force-pushed from 4e0e6d5 to 25147b8 (Compare)
@@ -128,9 +128,6 @@ def fiter_variable(self, other):
        " a symbolic placeholder."
    )

def may_share_memory(a, b):
Removing this avoids having to check for aliasing in the first place during the function call.
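For context, a helper like this usually just wraps NumPy's aliasing check. A minimal sketch of the kind of utility being removed (an assumption, not the exact original code):

```python
import numpy as np

def may_share_memory(a, b):
    # Assumed behavior of a typical may_share_memory helper:
    # only ndarrays can alias a shared buffer here.
    if isinstance(a, np.ndarray) and isinstance(b, np.ndarray):
        return np.may_share_memory(a, b)
    return False

x = np.zeros(5)
print(may_share_memory(x, x[1:]))        # True: a view aliases its base array
print(may_share_memory(x, np.zeros(5)))  # False: separate buffers
```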
    # and can't be aliased.
    if not (
        isinstance(inp, In)
        and inp.borrow
If it's mutable it must be borrow (this is asserted in In.__init__), so there is no need to check for both as before. Also, the attribute always exists.
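A small sketch of the simplified condition described here (illustrative only; the helper name is hypothetical, not the exact PR code):

```python
import pytensor.tensor as pt
from pytensor import In

def input_can_be_aliased(inp) -> bool:
    # In.__init__ asserts that mutable=True implies borrow=True, and `borrow`
    # always exists on In instances, so checking `borrow` alone is enough;
    # no separate mutable/getattr check is needed.
    return isinstance(inp, In) and inp.borrow

x = pt.vector("x")
print(input_can_be_aliased(In(x, mutable=True)))  # True: mutable implies borrow
print(input_can_be_aliased(In(x)))                # False by default
```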
    if any(
        inp.variable.type.is_super(other_inp.variable.type)
        or other_inp.variable.type.is_super(inp.variable.type)
        for other_inp in group
    ):
This is more careful than the previous check, which assumed a vector(shape=None) and a vector(shape=(1,)) could not be aliased because their types were different, but they could be. The new test condition covers this case.
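A small illustration of the case described here, assuming pytensor's TensorType equality and is_super semantics:

```python
import pytensor.tensor as pt

t_unknown = pt.TensorType("float64", shape=(None,))  # vector(shape=None)
t_length1 = pt.TensorType("float64", shape=(1,))     # vector(shape=(1,))

# The types are not equal, so a check based on type equality would conclude
# these inputs can never be aliased...
print(t_unknown == t_length1)          # False

# ...but a length-1 vector is a valid value for both types, so they can alias.
# Checking is_super in both directions catches this.
print(t_unknown.is_super(t_length1))   # expected True
print(t_length1.is_super(t_unknown))   # expected False
```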
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## main #1024 +/- ##
==========================================
- Coverage 81.90% 81.89% -0.01%
==========================================
Files 182 182
Lines 47879 47840 -39
Branches 8617 8608 -9
==========================================
- Hits 39214 39180 -34
+ Misses 6492 6487 -5
Partials 2173 2173
    # Group indexes of inputs that are potentially aliased to each other
    # Note: Historically, we only worried about aliasing inputs if they belonged to the same type,
    # even though there could be two distinct types that use the same kinds of underlying objects.
    potential_aliased_input_groups = []
Perhaps this logic could be a utility function? It is reused almost identically below.
What is reused almost identically? Also note that I want to remove this whole logic soon: #1026
Making groups from a list of elements based on an equivalence function
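A minimal sketch of such a grouping helper (a hypothetical illustration of the suggestion, not code from this PR):

```python
def group_by_equivalence(items, are_equivalent):
    """Group items so each item lands in a group containing at least one
    item it is equivalent to (hypothetical helper)."""
    groups = []
    for item in items:
        for group in groups:
            if any(are_equivalent(item, other) for other in group):
                group.append(item)
                break
        else:
            groups.append([item])
    return groups

# Example: group integers that are "equivalent" when they share parity
print(group_by_equivalence([1, 2, 3, 4], lambda a, b: (a - b) % 2 == 0))
# [[1, 3], [2, 4]]
```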
Maybe, but the goal of the PR is to reduce the overhead of function.__call__, which usually runs in a hot loop. Adding nested function calls may defeat the purpose. Perhaps I will just go ahead with #1026 and shed this whole logic.
I agree it's ugly to repeat the code.
Force-pushed from 7a2d4aa to 2faae23 (Compare)
Force-pushed from 23b4fe9 to 720ecbc (Compare)
Marking as a draft because I want to add a note to the docstrings of pytensor.In about mutable and alias.
Moved the last commit to a separate PR: #1049. Want to simmer a bit on that first.
We have way too much function call overhead. This PR tries to remove some of it.
I used this simple one-Op function as a test:
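(The benchmark snippet itself is not reproduced above; the following is a hypothetical reconstruction of a single-Op function along the lines described, where the Op, sizes, and timing calls are assumptions.)

```python
import pytensor
from pytensor.tensor.random.utils import RandomStream

# Assumed benchmark: one RandomVariable Op wrapped in a compiled function.
srng = RandomStream(seed=123)
x = srng.normal(size=(10,))
fn = pytensor.function([], x)

fn()     # full Python __call__ path, including input/output handling
fn.vm()  # calling the virtual machine directly, skipping that overhead
# In IPython: %timeit fn()  vs  %timeit fn.vm()
```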
Note that the largest speedup comes from removing the extra checks in RandomVariable.perform, but you can also see some minor speedup in the fn eval beyond what's observed by calling the vm directly.
Before the PR:
After the PR:
Some of the changes would be more visible in a function with more inputs (but still relatively fast).
It also deprecates a bunch of obscure options that should allow us to remove a few more conditionals in the future.
Related to #552
📚 Documentation preview 📚: https://pytensor--1024.org.readthedocs.build/en/1024/