chore(api, performance-metrics): clean up performance-metrics tracking #15289
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             edge   #15289      +/-   ##
==========================================
+ Coverage   63.20%   63.75%   +0.54%
==========================================
  Files         287      300      +13
  Lines       14891    15491     +600
==========================================
+ Hits         9412     9876     +464
- Misses       5479     5615     +136
```

Flags with carried forward coverage won't be shown.
Force-pushed from d64072a to b73010f.

Force-pushed from 40d1d64 to 70017cd ("missed something").
Almost there, I think we can slightly improve the analysis tracker and then we're good!
Sorry if this was answered in prior discussions, but does any of this need to be in `shared-data`? Since:

```python
if TYPE_CHECKING:
    import performance_metrics


def _handle_package_import() -> "typing.Type[performance_metrics.SupportsTracking]":
    ...


package_to_use: "typing.Type[performance_metrics.SupportsTracking]" = _handle_package_import()
```

When I think of what belongs in shared-data, I think of things that are used across languages and all up and down the stack, like the JSON protocol schemas.
@SyntaxColoring, your explanation of what goes in `shared-data` makes sense to me.
LGTM if Seth's comments are addressed.

Here are some typing-related comments. I don't think any of these, nor the `shared-data` organizational thing above, need to be done in this PR.

Thank you!
```python
UnderlyingFunctionParameters = typing.ParamSpec("UnderlyingFunctionParameters")
UnderlyingFunctionReturn = typing.Any
UnderlyingFunction = typing.TypeVar(
    "UnderlyingFunction",
    typing.Callable[..., UnderlyingFunctionReturn],
    typing.Callable[..., typing.Awaitable[UnderlyingFunctionReturn]],
)
```
This is Python's fault for being confusing, but for a few layered reasons, I don't think this is working as intended.

The red flag is that in `robot_context_tracker.py`, the type-checker is seeing the `func_to_track()` calls as returning `Any`. You're currently solving that with a pair of `cast`s, but if everything is working properly, that shouldn't be needed.

There are probably a few ways to fix this, but as a first step, I would do this:
```python
UnderlyingFunctionParameters = typing.ParamSpec("UnderlyingFunctionParameters")
UnderlyingFunctionReturn = typing.TypeVar("UnderlyingFunctionReturn")
UnderlyingFunction = typing.Callable[UnderlyingFunctionParameters, UnderlyingFunctionReturn]
```
- `UnderlyingFunctionReturn` goes from `Any` to a `TypeVar` to retain the return type of the wrapped function. This is what fixes the `cast` problem.
- `UnderlyingFunction` becomes a plain `Callable` alias instead of a `TypeVar` because its arg type is a `TypeVar` now, and I guess you can't have nested `TypeVar`s. (This might be python/typing#548, Higher-Kinded TypeVars, but that discussion is mostly over my head.)
Then, in the places using these type aliases, I would do this:

```python
def track(
    self,
    state: RobotContextState,
) -> typing.Callable[
    [UnderlyingFunction[UnderlyingFunctionParameters, UnderlyingFunctionReturn]],
    UnderlyingFunction[UnderlyingFunctionParameters, UnderlyingFunctionReturn],
]:
```

- Any mention of `UnderlyingFunction` becomes `UnderlyingFunction[UnderlyingFunctionParameters, UnderlyingFunctionReturn]` because it's a `Callable` alias now, and so its type parameters need to be specified.
Separately: I would consider not centralizing these aliases. Instead, maybe mark them as private (like `_UnderlyingFunctionReturn`) and redeclare them wherever they're needed.

Importing and sharing `TypeVar`s can confuse people because, although they're declared in one central place, the type-checker evaluates them in the local context of wherever they're used. Things can get especially confusing when there's subclassing involved, like `SupportsTracking` in this PR. The base class's type parameter is not necessarily the same as the subclass's type parameter, even though they're the same `TypeVar`. I think there's some broken typing in `opentrons.protocol_api.core` because of stuff like this.
This is, again, Python's fault for being confusing. No other language works like this as far as I know. Thankfully, Python 3.12 fixes it by replacing shared `TypeVar` declarations with inline type-parameter syntax (PEP 695).
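A small runnable illustration of the redeclare-locally suggestion (class and attribute names are hypothetical, loosely modeled on the PR's `SupportsTracking`): each module declares its own private `TypeVar`, and a subclass binds the base's type parameter at the subscription site rather than sharing the variable itself.

```python
import typing

# Private TypeVar, declared where it is used rather than imported from a hub module.
_UnderlyingFunctionReturn = typing.TypeVar("_UnderlyingFunctionReturn")


class SupportsTracking(typing.Generic[_UnderlyingFunctionReturn]):
    """Hypothetical generic base class."""

    def __init__(self, result: _UnderlyingFunctionReturn) -> None:
        self.result = result


class IntTracker(SupportsTracking[int]):
    # Here the base's type parameter is bound to int. The TypeVar object itself
    # carries no meaning across modules; only the binding site matters.
    pass
```

`typing.get_args(SupportsTracking[int])` returns `(int,)`, showing that the binding lives at the subscription, not in the shared variable.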
Looks good to me, although I'm a little surprised to see that annotation on `_do_analyze` disappear.
```diff
@@ -198,7 +197,6 @@ def _get_return_code(analysis: RunResult) -> int:
     return 0


-@track_analysis
 async def _do_analyze(protocol_source: ProtocolSource) -> RunResult:
```
Wait, wasn't this the point of the exercise? Or are you doing this in a follow-up now?
Sorry, in a follow-up. Since I am not actually installing this on a robot yet, I am deferring it to when I can verify whether it is actually working or not.
Overview
I originally had all my work in this PR then got pulled to other projects. 2 months later, the PR is ~250 commits behind edge and is a nightmare to get in sync. I created a new branch off of edge and cherry-picked pertinent things.
This PR covers:
Test Plan
Changelog
- Removed the `track_analysis` decorator on CLI analyze. That analysis does not run on the robot, so we don't care about tracking it.

Review requests
Risk assessment
Low