
tests: unit tests do not check the outputs of models #53

Open
Mayukhdeb opened this issue Apr 2, 2021 · 2 comments

Comments

@Mayukhdeb
Member

While the unit tests do cover the output types, they do not check whether the outputs themselves are correct.

To do:

  • Compute a loss function w.r.t. the ideal outputs and make sure the losses are low enough
  • Make sure the output dtype matches the original (int, float32, float64)
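The two checks above could be sketched roughly as follows. This is a minimal sketch, not the project's actual test code; the helper name `check_model_output` and the MSE threshold are assumptions for illustration.

```python
import numpy as np

def check_model_output(pred, ideal, loss_threshold=1e-3):
    """Hypothetical unit-test helper: verify dtype and closeness to an ideal output.

    pred  -- model prediction as a numpy array
    ideal -- the known-good ("ideal") output to compare against
    """
    # Check 2 from the to-do list: the output dtype must match the original.
    assert pred.dtype == ideal.dtype, f"dtype mismatch: {pred.dtype} vs {ideal.dtype}"
    # Check 1: a simple loss (here MSE) against the ideal output must be low enough.
    mse = np.mean((pred.astype(np.float64) - ideal.astype(np.float64)) ** 2)
    assert mse < loss_threshold, f"loss too high: {mse}"
    return mse
```

Any loss could be swapped in for the MSE here; the point is only that the test fails loudly when the prediction drifts from the ideal output or changes dtype.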
@vrutikrabadia
Contributor

Hello @Mayukhdeb, for the lineage population model we can compute a loss against the ground-truth population labels from the CSV. But I don't understand how we can test the output of the segmentation model. One approach I can think of is running the model on the same image multiple times and computing the pairwise loss between the predictions. If that would be helpful, I'd like to work on this issue.
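The repeated-run idea above could be sketched like this. It is a hedged sketch, not code from the repository; `model_fn` stands in for whatever callable produces the segmentation, and it only tests determinism (consistency across runs), not correctness.

```python
import numpy as np

def check_segmentation_consistency(model_fn, image, atol=1e-6):
    """Hypothetical consistency check: run the model twice on the same image
    and verify the two predictions agree.

    Note: this catches nondeterminism, not wrong-but-stable outputs.
    """
    pred_a = model_fn(image)
    pred_b = model_fn(image)
    # A deterministic model in eval mode should produce near-identical outputs.
    assert np.allclose(pred_a, pred_b, atol=atol), "predictions differ across runs"
    # Return the mean absolute difference as a simple "mutual loss".
    return np.mean(np.abs(pred_a - pred_b))
```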

@Mayukhdeb
Member Author

Hi @vrutikrabadia.

For the segmentation model, I plan to run the tests by saving a few reference .npy files, which would then be compared against the predictions with np.allclose().
