fix pending evaluation also being listed as completed #5194
Conversation
Walkthrough

The changes in this pull request modify the model evaluation panel so that only evaluations with associated results are listed as completed.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)
Line range hint 359-363: Consider using more specific exception handling

While the current implementation works correctly, it could be improved by catching specific exceptions rather than using a broad except clause.
Consider this improvement:
```diff
 def has_evaluation_results(self, dataset, eval_key):
     try:
         return bool(dataset._doc.evaluations[eval_key].results)
-    except Exception:
+    except (KeyError, AttributeError):
         return False
```

This makes it clearer which error conditions we expect and handle.
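The narrowed clause covers the two failure modes that can be expected here: a `KeyError` when the eval key is not registered on the dataset, and an `AttributeError` when the evaluation document exposes no results. A minimal, self-contained sketch (using `_EvalDoc`, a hypothetical stand-in for FiftyOne's evaluation documents, not the library's actual classes) illustrates the behavior:

```python
# Minimal sketch of the expected failure modes; _EvalDoc is a hypothetical
# stand-in for FiftyOne's evaluation documents
class _EvalDoc:
    def __init__(self, results=None):
        self.results = results


def has_evaluation_results(evaluations, eval_key):
    try:
        return bool(evaluations[eval_key].results)
    except (KeyError, AttributeError):
        # KeyError: eval_key was never registered
        # AttributeError: the document exposes no results attribute
        return False


evaluations = {
    "eval_done": _EvalDoc(results=object()),
    "eval_pending": _EvalDoc(),  # no results written back yet
}

print(has_evaluation_results(evaluations, "eval_done"))      # True
print(has_evaluation_results(evaluations, "eval_pending"))   # False (results is None)
print(has_evaluation_results(evaluations, "missing"))        # False (KeyError)
print(has_evaluation_results({"eval_bad": object()}, "eval_bad"))  # False (AttributeError)
```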
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1 hunks)
🔇 Additional comments (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)

83-88: LGTM: Proper filtering of evaluations with results
The conditional check effectively prevents pending evaluations from being incorrectly listed as completed by only including evaluations that have associated results.
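For context, a rough sketch (not the PR's exact implementation) of how such filtering can work, reusing the `has_evaluation_results` helper from the review comment above, adapted here as a standalone function:

```python
# Only eval keys whose results have been written back are reported as
# completed; pending runs are filtered out by the extra check


def has_evaluation_results(dataset, eval_key):
    try:
        return bool(dataset._doc.evaluations[eval_key].results)
    except (KeyError, AttributeError):
        return False


def list_completed_evaluations(dataset):
    # Dataset.list_evaluations() returns the registered eval keys; the
    # extra check excludes runs that do not have results yet
    return [
        eval_key
        for eval_key in dataset.list_evaluations()
        if has_evaluation_results(dataset, eval_key)
    ]
```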
What changes are proposed in this pull request?
Fixes pending evaluations also being listed as completed in the model evaluation panel.
How is this patch tested? If it is not, please explain why.
Using the model evaluation panel.
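As a rough illustration (not the exact procedure used for this PR), one way to exercise the panel is to run an evaluation, confirm it has results, and then inspect the Model Evaluation panel in the App; the quickstart zoo dataset is assumed here:

```python
import fiftyone as fo
import fiftyone.zoo as foz

# Illustrative check only, assuming the quickstart zoo dataset
dataset = foz.load_zoo_dataset("quickstart")
dataset.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)

# A completed evaluation has results that the panel can display
results = dataset.load_evaluation_results("eval")
print(results is not None)

# Open the App and inspect the Model Evaluation panel
session = fo.launch_app(dataset)
```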
Release Notes
Is this a user-facing change that should be mentioned in the release notes?
Give a description of this change to be included in the release notes for FiftyOne users.
(Details in 1-2 sentences. You can just refer to another PR with a description
if this PR is part of a larger change.)
What areas of FiftyOne does this PR affect?
fiftyone - Python library changes

Summary by CodeRabbit
These changes refine the user experience by ensuring that the model evaluation panel lists only evaluations with completed results, so pending evaluations are no longer shown as completed.