
100% Normalization Uncertainty #462


Hi Tomoya, this happens because of the algorithm used to perform extrapolations. For normalization uncertainties, the extrapolation is exponential ("code 4" in pyhf). This has the desirable property of never predicting negative yields as long as the yields of the up/down templates are greater than zero. If you have a variation that decreases the normalization of a sample by 100%, the extrapolation breaks (technically I think there is a division by zero causing issues).
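
For intuition, here is a minimal sketch of a piecewise-exponential multiplier in the spirit of code 4. This is not pyhf's actual implementation (the real code 4 additionally smooths the region |alpha| < 1 with a sixth-order polynomial whose coefficients involve log(lo) and log(hi)), but it shows both the always-positive property and how a down template at exactly zero blows up in log space:

```python
import numpy as np

# Simplified piecewise-exponential multiplier (a sketch, not pyhf's actual
# code-4 implementation). lo/hi are the down/up yield ratios relative to
# the nominal template.
def exp_multiplier(alpha, lo, hi):
    return hi**alpha if alpha >= 0 else lo**(-alpha)

print(exp_multiplier(1.0, 0.5, 1.5))   # 1.5  -> +1 sigma scales yield up 50%
print(exp_multiplier(-1.0, 0.5, 1.5))  # 0.5  -> -1 sigma scales yield down 50%
print(exp_multiplier(-2.0, 0.5, 1.5))  # 0.25 -> extrapolation stays positive

# A 100% down variation means lo = 0: the multiplier collapses to zero and
# the log-space coefficients diverge.
print(exp_multiplier(-1.0, 0.0, 1.5))  # 0.0
print(np.log(0.0))                     # -inf (RuntimeWarning: divide by zero)
```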

What I am used to seeing in practice are solutions where the variation in the down direction is set to something very close to 100%. You can then still pull the nuisance parameter beyond the [-1, 1] interval and get physi…
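
As an illustration of that workaround, here is a hedged sketch of such a setup (the channel/sample names and yields are made up): a normsys modifier whose down variation is 0.001 rather than exactly 0. Pulling the nuisance parameter beyond ±1 then still yields positive predictions:

```python
import numpy as np
import pyhf

# Hypothetical single-bin spec; names and numbers are illustrative only.
spec = {
    "channels": [
        {
            "name": "singlechannel",
            "samples": [
                {
                    "name": "signal",
                    "data": [10.0],
                    "modifiers": [
                        {"name": "mu", "type": "normfactor", "data": None}
                    ],
                },
                {
                    "name": "background",
                    "data": [50.0],
                    "modifiers": [
                        {
                            "name": "bkg_norm",
                            "type": "normsys",
                            # down variation set very close to, but not
                            # exactly, -100% so the extrapolation stays finite
                            "data": {"hi": 1.5, "lo": 1e-3},
                        }
                    ],
                },
            ],
        }
    ]
}
model = pyhf.Model(spec)

# Pull the nuisance parameter beyond [-1, 1]: yields remain positive.
pars = np.asarray(model.config.suggested_init())
for alpha in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    pars[model.config.par_slice("bkg_norm")] = alpha
    print(alpha, model.expected_actualdata(pars))
```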
