How to specify that InvalidResultException is expected result #220

I added model inputs and expected results to the Model Verification part of my PMML file. This works fine. However, I would also like to test the model's response to invalid model inputs. How do I do that?

I tried leaving the output field elements of the model verification row empty, and I also tried removing them. Either way, when I evaluate the model I keep getting an InvalidResultException, and the evaluation of the model halts. What I would like to achieve is that the invalid result is the expected result, and that the evaluation of the model continues. Is this possible?

Comments
Your question is raised in relation to the ModelVerification element. Today, the answer is that it's not supported, because the designers of the PMML specification did not anticipate this use case.

We can design and implement something ourselves, and if it proves to be successful, submit it for DMG.org's consideration. The simplest design would be: "If the evaluation of a data record is expected to fail with an error, then the data record must not define any expected target or output values".

A successful data record (inputs plus targets and outputs):

```xml
<row>
  <petal_x0020_length>1.4</petal_x0020_length>
  <petal_x0020_width>0.2</petal_x0020_width>
  <sepal_x0020_length>1.4</sepal_x0020_length>
  <sepal_x0020_width>0.2</sepal_x0020_width>
  <PredictClass>Iris-setosa</PredictClass>
  <Iris-setosa_x0020_Prob>0.62</Iris-setosa_x0020_Prob>
  <Iris-versicolor_x0020_Prob>0.30</Iris-versicolor_x0020_Prob>
  <Iris-virginica_x0020_Prob>0.08</Iris-virginica_x0020_Prob>
</row>
```

A failing data record (only inputs, no targets or outputs):

```xml
<row>
  <petal_x0020_length>1.4</petal_x0020_length>
  <petal_x0020_width>0.2</petal_x0020_width>
  <sepal_x0020_length>1.4</sepal_x0020_length>
  <sepal_x0020_width>0.2</sepal_x0020_width>
</row>
```

A more complicated design would involve extending the ModelVerification element itself.
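To make this convention concrete, here is a minimal sketch of how a batch testing harness could apply it. This is not the library's actual verification code: the verifyRecord helper is an assumption for illustration, and the String-keyed Evaluator#evaluate method follows the 1.6.X API line.

```java
import java.util.Map;

import org.jpmml.evaluator.EvaluationException;
import org.jpmml.evaluator.Evaluator;

public class VerificationSketch {

    /**
     * Hypothetical helper: a data record that defines no expected target
     * or output values is assumed to expect an evaluation failure.
     */
    static boolean verifyRecord(Evaluator evaluator, Map<String, ?> arguments, Map<String, ?> expectedResults) {
        boolean expectFailure = expectedResults.isEmpty();

        try {
            Map<String, ?> results = evaluator.evaluate(arguments);

            // The record evaluated successfully. It passes only if a success was
            // expected and the results match the expectations. A real implementation
            // would honour the precision and zeroThreshold attributes of the
            // VerificationField element instead of using strict equality
            return !expectFailure && results.entrySet().containsAll(expectedResults.entrySet());
        } catch (EvaluationException ee) {
            // The record failed to evaluate (e.g. with an InvalidResultException).
            // It passes only if a failure was expected
            return expectFailure;
        }
    }
}
```

The appeal of this design is that the failure expectation is inferred from the shape of the data record, so no new PMML markup is needed.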
The indication of a failing test should be rather generic (e.g. a boolean flag); it must not be linked to a particular error class (e.g. InvalidResultException), because error classes differ from one PMML engine implementation to another.
It's not possible using the current batch testing framework. The conflict object can store at most one exception. So, if there are multiple invalid input field values, it will only "capture" the first invalid value, and ignore the rest. One idea for improving the batch testing framework would be a conflict object that collects all exceptions, as sketched below.
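For instance, a conflict object that collects every exception raised for a data record could look like the following sketch (the class and its fields are hypothetical, not the library's actual type):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical improved conflict object that collects every exception
 * raised while evaluating a single data record, instead of keeping
 * only the first one.
 */
public class Conflict {

    private final int row;

    private final Map<String, ?> arguments;

    private final List<Exception> exceptions = new ArrayList<>();

    public Conflict(int row, Map<String, ?> arguments) {
        this.row = row;
        this.arguments = arguments;
    }

    public void addException(Exception exception) {
        this.exceptions.add(exception);
    }

    public int getRow() {
        return this.row;
    }

    public Map<String, ?> getArguments() {
        return this.arguments;
    }

    public List<Exception> getExceptions() {
        return this.exceptions;
    }
}
```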
Just opened a JPMML-Evaluator 1.6.X development branch. That's a good opportunity to work on the identified "structural problems".
I've requested DMG.org to clarify their position on this.
Some more thoughts about #220 (comment): Right now, the integration testing facility only reports failures. The conflict object maintains an integer row number as the identifier of the failing data record.
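One way to report successes alongside failures would be to keep a per-record outcome, keyed by that integer row number. A sketch (the VerificationReport class and its Outcome enum are made up for illustration):

```java
import java.util.Map;
import java.util.TreeMap;

/** Hypothetical per-record verification report. */
public class VerificationReport {

    public enum Outcome {
        SUCCESS,
        FAILURE,
        EXPECTED_FAILURE
    }

    // Keyed by the integer row number of the data record
    private final Map<Integer, Outcome> outcomes = new TreeMap<>();

    public void record(int row, Outcome outcome) {
        this.outcomes.put(row, outcome);
    }

    public long count(Outcome outcome) {
        return this.outcomes.values().stream()
            .filter(outcome::equals)
            .count();
    }
}
```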
This really sounds great, and is exactly what I need! However, I'm not a Java developer and cannot write Java code myself. Please let me know if there is any other way that I can be of assistance.
I'll probably extend the JPMML-Evaluator 1.5.X development branch a bit, and implement this "if the verification data record does not specify any target or output values, then assume that it must raise an error" behaviour there. All the remaining batch integration testing functionality then goes into the 1.6.X development branch, because there will be breaking API changes.
The integration testing suite will also be re-modularized and re-factored to enable no-code/low-code setups. For example, there will be easy runners for asserting that a model reproduces its verification data.
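Purely speculatively, such a runner might reduce to a few lines of JUnit code. The Iris.pmml file name, the out-of-range field value, and the reuse of the verifyRecord sketch from above are all illustrative assumptions; LoadingModelEvaluatorBuilder is the library's existing entry point for loading models.

```java
import java.io.File;
import java.util.Collections;
import java.util.Map;

import org.jpmml.evaluator.Evaluator;
import org.jpmml.evaluator.LoadingModelEvaluatorBuilder;
import org.junit.Assert;
import org.junit.Test;

public class IrisModelTest {

    @Test
    public void expectFailure() throws Exception {
        Evaluator evaluator = new LoadingModelEvaluatorBuilder()
            .load(new File("Iris.pmml"))
            .build();

        // An out-of-range input value; the empty expected-results map
        // signals that the evaluation is expected to fail
        Map<String, ?> arguments = Collections.singletonMap("petal length", -999.0);

        Assert.assertTrue(VerificationSketch.verifyRecord(evaluator, arguments, Collections.emptyMap()));
    }
}
```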
Simply keep raising thought-provoking issues. Anything about real-life data science workflows where (J)PMML seems like a good technical platform (something to the tune of "almost works now, but could be improved in such and such ways").