In theory, it would also be possible to specify the learner used to fit the censoring model, tune its parameters, etc. For now I'd restrict this to a simple parametric learner, e.g. coxph, but that raises questions, e.g. what to do in n < p cases.
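To make the idea concrete, here is a minimal, language-agnostic sketch (in Python, with illustrative names) of what a pluggable censoring model buys you: the censoring survival function G(t) is estimated from the data (here a reverse Kaplan-Meier estimator stands in for whatever learner the user would plug in, e.g. coxph), and uncensored observations are then reweighted by 1/G(t). This is an assumption about the intended IPCW mechanism, not an implementation from this thread.

```python
# Sketch of inverse-probability-of-censoring weighting (IPCW).
# The censoring model here is a reverse Kaplan-Meier estimator; in the
# proposed design this would be any user-specified learner (e.g. coxph).
# All names are illustrative, and ties are handled naively.

def reverse_kaplan_meier(times, events):
    """Estimate the censoring survival function G(t).

    Censoring is treated as the 'event' of interest (events[i] == 0),
    hence 'reverse' Kaplan-Meier.
    Returns (time, G(t)) pairs in time order.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    steps = []
    for i in order:
        if events[i] == 0:  # a censoring "event" drops G(t)
            surv *= (at_risk - 1) / at_risk
        steps.append((times[i], surv))
        at_risk -= 1
    return steps

def ipcw_weights(times, events):
    """Weight each uncensored observation by 1 / G(t); censored get 0."""
    g = dict(reverse_kaplan_meier(times, events))
    return [1.0 / g[t] if d == 1 and g[t] > 0 else 0.0
            for t, d in zip(times, events)]
```

For example, with `times = [1, 2, 3, 4]` and `events = [1, 0, 1, 1]`, the single censoring at t = 2 drops G(t) to 2/3, so the later event times receive weight 1.5. Swapping the estimator for a parametric model is exactly where the n < p question arises.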
Do we really want to do this? I know this is possible in {pec}, but it has no good theoretical justification. How do you validate the censoring model? And what about bias in the data, which just gets propagated forward by the second learner? I'm not convinced this is something we should implement.
Yes, validation of the censoring model is a problem. But we should still allow it, so that users are able to:
a) recreate results from literature
b) use it as a baseline for comparison when developing alternatives or similar methods