Enable sorting when batching is enabled #355
Conversation
graphene_sqlalchemy/batching.py
Outdated
# Cache this across `batch_load_fn` calls
# This is so SQL string generation is cached under-the-hood via `bakery`
# Caching the relationship loader for each relationship prop.
RELATIONSHIP_LOADERS_CACHE = {}
We store the loaders per relationship so each item accesses the same cached result ensuring that the selectin is only triggered once
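An illustrative sketch of the caching idea described above (plain Python, not the library's actual code; the class and function names are hypothetical): one loader is cached per relationship and shared by every parent row, so the batched selectin query fires once per relationship instead of once per row.

```python
class RelationshipLoader:
    """Hypothetical stand-in for the batching data loader."""

    def __init__(self, relationship_prop):
        self.relationship_prop = relationship_prop
        self.batch_calls = 0  # counts how often the batched query runs

    def batch_load(self, parent_ids):
        self.batch_calls += 1  # one SELECT ... IN (...) for the whole batch
        return [f"children_of_{pid}" for pid in parent_ids]


RELATIONSHIP_LOADERS_CACHE = {}


def get_loader(relationship_prop):
    # Every item resolving the same relationship gets the same loader,
    # so the cached (baked) SQL and the batch are shared across items.
    if relationship_prop not in RELATIONSHIP_LOADERS_CACHE:
        RELATIONSHIP_LOADERS_CACHE[relationship_prop] = RelationshipLoader(relationship_prop)
    return RELATIONSHIP_LOADERS_CACHE[relationship_prop]
```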
graphene_sqlalchemy/batching.py
Outdated
)
    RELATIONSHIP_LOADERS_CACHE[relationship_prop] = loader
else:
    loader.loop = get_event_loop()
I'm not 100% sure about this. Do any of you see issues with this?
Why do we need the get_event_loop() call in the else part? Isn't that initially covered by aiodataloader already, and shouldn't the cache also get a full reset if the event loop changes (implying a thread restart/change)? aiodataloader recommends creating a new loader for each request; this would keep the loaders between different queries, right?
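A minimal sketch of the concern raised above (names are illustrative, not the library's API): instead of patching `loader.loop` on every call, a cached loader could simply be dropped whenever the current event loop is not the one it was created under, which also covers thread restarts.

```python
import asyncio


class Loader:
    """Stand-in for a data loader that binds the event loop at creation."""

    def __init__(self):
        self.loop = asyncio.get_running_loop()


_LOADER_CACHE = {}


def get_loader(relationship_prop):
    loop = asyncio.get_running_loop()
    loader = _LOADER_CACHE.get(relationship_prop)
    if loader is None or loader.loop is not loop:
        # Stale loop (new request / new event loop / new thread):
        # rebuild the loader instead of reusing the cached one.
        loader = Loader()
        _LOADER_CACHE[relationship_prop] = loader
    return loader
```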
Makes sense, I changed it
September update: There is also preliminary work in Graphene to vendor the DataLoader from aiodataloader into Graphene, and also change its __init__ behavior to not ask for an event loop so early on init. Just a warning, because that might touch the same points we are touching here.
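A hedged sketch of the direction mentioned above: a loader that defers the event-loop lookup until first use instead of asking for it in `__init__`. `LazyLoopLoader` is a hypothetical name for illustration, not Graphene's actual class.

```python
import asyncio


class LazyLoopLoader:
    """Defers event-loop lookup so it can be constructed outside any loop."""

    def __init__(self):
        self._loop = None  # intentionally not resolved at construction time

    @property
    def loop(self):
        # Resolved lazily, so the loader can be created at import time,
        # before any event loop exists.
        if self._loop is None:
            self._loop = asyncio.get_running_loop()
        return self._loop
```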
)


class ArticleReader(Base):
Had to add some deeper nesting in the test data for more robust tests
I added a Tags class in #357. Will see if there's a way to consolidate over there after this is merged.
batching = True
connection_class = Connection


class Query(graphene.ObjectType):
As mentioned, the original test schema defines custom resolve_ methods which I find odd, but maybe people use it like that. We use it like this.
Also using it the way you suggest, LGTM. The original schema doesn't use relay - without the relay connection field classes your resolvers don't automatically generate the SQLAlchemy queries.
)
session.add(reporter_1)
reporter_2 = Reporter(
    first_name='Reporter_2',
unfortunately there were some flake8 errors I had to fix, so ignore these indentation changes
…pLoader between queries
…re the enum will get the actual field_name
I still need to investigate those failing tests
Codecov Report
@@            Coverage Diff             @@
##           master     #355      +/-   ##
==========================================
+ Coverage   96.37%   96.50%   +0.12%
==========================================
  Files           9        9
  Lines         773      801      +28
==========================================
+ Hits          745      773      +28
  Misses         28       28
The tests pass now. The remaining issue was that SQLAlchemy 1.2 creates slightly different select statements, so my asserts were not working there. This is ready from my side now, please give it another look.
Tested this locally. Working nicely!
I suggested a few minor adjustments. Please check my comments.
Aside from Erik's comments, all LGTM! Let me know if you'd like me to write a test case for custom ORMField attributes.
It would be great @sabard, if you could write the test for the potential issue you have in mind. If it fails I will look into it and work on a solution.
Looks like all works well for custom ORMField names.
Off topic, but is sorting supported on non-Relay-style batching? I wasn't able to get it to work; maybe deserves its own test if supported?
@PaulSchweizer @sabard are we ready to merge?
This is a followup of our discussion from July 18th about batching and sorting.
I took a deeper dive into the batching feature and have a better understanding of it now.
Basically, the core issue was that the batching resolver did not take care of running the initial query. Instead, the initial query had to be run "manually" in a custom "resolve_" field, as indicated in the tests:
graphene-sqlalchemy/graphene_sqlalchemy/tests/test_batching.py
Line 67 in dfee3e9
That of course bypassed the "get_query" function in the resolver and thus also made sorting "impossible".
Let me know what you think.
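A toy model of the bug described above (plain Python, no SQLAlchemy; all names here are illustrative stand-ins, not the library's API): sorting is applied inside the get_query step, so a custom resolve_ method that builds its own result list never sees the sort argument, while a resolver that goes through get_query does.

```python
def get_query(rows, sort=None):
    # Stand-in for the get_query hook that applies sort arguments.
    if sort is not None:
        return sorted(rows, key=lambda row: row[sort])
    return list(rows)


def batching_resolver(rows, sort=None):
    # New behavior: the batching resolver runs the initial query itself,
    # so sort arguments flow through get_query.
    return get_query(rows, sort=sort)


def manual_resolver(rows, sort=None):
    # Old-style custom resolve_ method: bypasses get_query, silently
    # dropping the sort argument.
    return list(rows)


rows = [{"first_name": "Reporter_2"}, {"first_name": "Reporter_1"}]
```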