UPDATE: See the suggestion at the bottom: check `lambda - epsilon` and, if the result is different, raise lots of warnings.
tuning for a model with discrete test statistics is kinda scary. i've been playing with a simple binomial model with n=10 so that the possible test stats are `i / 10` for i in 0…10. the tuning code does the correct thing and there are no errors yet, but i just wanted to bring this up…
example: suppose that the selected lambda** is 0.2. well, 0.2 is not precisely representable in floating point. the actual output from the code is 0.19999999999999996. if we use this threshold, the type I error turns out to be ~1%. but if we accidentally use 0.2 as the threshold, then suddenly our type I error is ~6% because there are lots of test statistics tied right at 0.19999999999999996, and the slightly larger threshold sweeps all of those ties into the rejection region. this is kinda scary and feels like the kind of thing that is going to bite us in the ass sometime.
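here's a minimal sketch of the failure mode (not the real tuning code: the tied statistics and the `stat < lam` rejection rule are assumptions for illustration):

```python
import numpy as np

# 5% of the null simulations produce a statistic exactly equal to the
# float slightly below 0.2 (the value printed by the tuning code above).
stats = np.array([0.19999999999999996] * 5 + [0.5] * 95)

# assumed rejection rule: reject when stat < lam
for lam in (0.19999999999999996, 0.2):
    print(f"lam={lam!r}  type I error={np.mean(stats < lam):.2f}")
# lam=0.19999999999999996  type I error=0.00
# lam=0.2  type I error=0.05
```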
potential ideas:
- subtract a small value from the final tuning threshold? (see the sketch after this list)
- ignore this?
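a sketch of the `lambda - epsilon` check from the UPDATE at the top (the helper name is hypothetical, and it again assumes rejection happens when `stat < lam`):

```python
import warnings
import numpy as np

def check_discretization(lam, stats):
    """Hypothetical checker: shift lambda** down by one ulp and warn loudly
    if the set of rejections changes, i.e. statistics are tied at lambda**."""
    lam_minus_eps = np.nextafter(lam, -np.inf)  # largest float strictly below lam
    n_reject = np.sum(stats < lam)
    n_reject_eps = np.sum(stats < lam_minus_eps)
    if n_reject != n_reject_eps:
        warnings.warn(
            f"{n_reject - n_reject_eps} test statistics are tied at "
            f"lambda**={lam!r}; type I error is sensitive to one-ulp "
            f"perturbations of the threshold. Consider using "
            f"{lam_minus_eps!r} instead."
        )
    return lam_minus_eps
```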
I ran into this issue again when I flipped between two different implementations of the same model. The two implementations were identical up to one or two ulps, but that was enough to matter. Subtracting $\varepsilon$ from $\lambda^{**}$ would have completely solved the problem.
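for concreteness, a toy version of what happened (the one-ulp discrepancy between implementations is simulated with `np.nextafter`):

```python
import numpy as np

lam = 0.2
stat_a = 0.2                     # implementation A
stat_b = np.nextafter(0.2, 0.0)  # implementation B: one ulp below A

print(stat_a < lam, stat_b < lam)  # False True -> the two runs disagree

# shifting lambda** down by one ulp puts both statistics on the same side
lam_eps = np.nextafter(lam, -np.inf)
print(stat_a < lam_eps, stat_b < lam_eps)  # False False
```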
tbenthompson changed the title from "Discrete tuning is scary" to "Document 'Discrete tuning is scary'" on Feb 28, 2023
tbenthompson changed the title from "Document 'Discrete tuning is scary'" to "Implement discretization checker." on Feb 28, 2023