Does the joint model suffer from issues related to overfitting? #68
Hi @ec-ho-ra-mos, thanks for giving mokapot a try, and for the question!

The short answer: no, a joint model should result in no more overfitting than a standard Percolator/mokapot analysis.

The long answer: the purpose of this approach is to leverage data from multiple experiments, which may individually be small, to reduce the variance of the learned model. If each experiment is itself fairly large, the results from a joint model and from modeling each experiment individually should converge.

Note that the static modeling approach tackles a similar issue, but in a different context. It assumes we have a large amount of prior knowledge with which to build a model, which is often the case. We can then use the model learned from prior data to analyze new experiments without cross-validation, because no additional model fitting takes place. Learning a model from large amounts of prior data also serves to reduce the variance of the learned model.
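The variance-reduction argument above can be demonstrated with a small simulation. This is a minimal NumPy sketch, not mokapot's actual implementation: it uses a simple linear scoring model with made-up weights and experiment sizes, and compares the coefficient error of models fit per-experiment against a single joint fit on the pooled data.

```python
# Illustrative sketch (not mokapot itself): pooling several small
# "experiments" into one joint fit reduces the variance of the learned model.
# The true weights, experiment sizes, and noise level below are all
# hypothetical choices for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])   # hypothetical "true" feature weights
n_exp, n_per, noise = 4, 25, 1.0      # 4 small experiments, 25 samples each

sep_err, joint_err = [], []
for _ in range(200):                  # repeat to estimate estimator variance
    Xs = [rng.normal(size=(n_per, 3)) for _ in range(n_exp)]
    ys = [X @ w_true + rng.normal(scale=noise, size=n_per) for X in Xs]

    # Fit each small experiment on its own; record the coefficient error.
    for X, y in zip(Xs, ys):
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        sep_err.append(np.sum((w_hat - w_true) ** 2))

    # Joint fit: pool all experiments into one training set.
    X_joint, y_joint = np.vstack(Xs), np.concatenate(ys)
    w_joint, *_ = np.linalg.lstsq(X_joint, y_joint, rcond=None)
    joint_err.append(np.sum((w_joint - w_true) ** 2))

print(f"per-experiment mean squared coefficient error: {np.mean(sep_err):.4f}")
print(f"joint-model mean squared coefficient error:    {np.mean(joint_err):.4f}")
```

The joint fit's error is consistently lower, and the gap shrinks as each experiment grows, matching the convergence point made above.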
Hello!
Before asking my question, let me first walk through what I understand about the static model and the joint model from the mokapot JPoS paper.
My question is: wouldn't this cause some kind of overfitting, especially if the number of experiments used to train the joint model is not large enough?
Thank you very much for your attention to this question.