Posterior methodologies with Random Forests #319
Hi! Could you clarify a bit what you mean by posterior methodologies and their integration into the ELFI pipeline? For example, would you like to implement RF-ABC within ELFI? Note that if you don't care about the threshold for rejection ABC and only want to generate a reference table from the ELFI model, you can also set …
Hi, I'm working with J.-M. Marin. Posterior RF methodologies such as model choice and parameter estimation work directly on ABC reference tables, as stated in the references below. By integration into ELFI, I originally meant implementing a new inference method, as documented there.

I am not sure about batch processing. RF-ABC prediction performance degrades a lot if you train on only a small subset of the data, and I don't know how to "accumulate" posterior results from successive batches other than by retraining a forest on all past batches, which of course defeats the purpose of batching. This is perhaps a use case for Mondrian forests (Lakshminarayanan, Roy, and Teh 2014), not the classical Breiman forests like the ones we use.

The threshold doesn't matter much with RF-ABC, but that doesn't mean we shouldn't have one, so I think …

References:
- Pudlo, Pierre, Jean-Michel Marin, Arnaud Estoup, Jean-Marie Cornuet, et al. 2016. "Reliable ABC Model Choice via Random Forests." Bioinformatics 32 (6): 859–866.
- Raynal, Louis, Jean-Michel Marin, Pierre Pudlo, Mathieu Ribatet, et al. 2019. "ABC Random Forests for Bayesian Parameter Inference." Bioinformatics 35 (10): 1720–1728.
- Lakshminarayanan, Balaji, Daniel M. Roy, and Yee Whye Teh. 2014. "Mondrian Forests: Efficient Online Random Forests." Advances in Neural Information Processing Systems 27.
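To make the batching problem above concrete, here is a minimal sketch of the "retrain on all past batches" workaround, with scikit-learn's `RandomForestClassifier` standing in for abcranger's ranger-based forests and purely toy data (the batch generator and all names are illustrative assumptions, not abcranger's API):

```python
# Sketch: accumulating reference-table batches and retraining a Breiman-style
# forest from scratch each time a batch arrives. scikit-learn stands in for
# abcranger here; the point is the retraining cost, not the exact library.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_batch(n=200, n_stats=4):
    """Toy batch: a model index (0/1) plus summary statistics."""
    model = rng.integers(0, 2, size=n)
    stats = rng.normal(loc=model[:, None], size=(n, n_stats))
    return model, stats

batches = []  # growing list of (model, stats) pairs
for _ in range(3):
    batches.append(make_batch())
    # A classical random forest cannot be updated incrementally, so every
    # new batch forces a retrain on the concatenation of all past batches.
    y = np.concatenate([m for m, _ in batches])
    X = np.vstack([s for _, s in batches])
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(X.shape)  # full accumulated table: (600, 4)
```

A Mondrian forest would instead support an online `partial_fit`-style update, which is what makes it attractive for this batch setting.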
Summary:
Currently testing a Python module wrapping https://github.com/diyabc/abcranger: posterior methodologies (model choice and parameter estimation) with random forests on a reference table.
(See the references)
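For readers unfamiliar with the two methodologies, a hedged sketch on a toy reference table, using scikit-learn as a stand-in for abcranger (which wraps ranger): model choice is a classification of the model index from summary statistics, and parameter estimation is a regression of a parameter on summary statistics. All data and names below are illustrative assumptions.

```python
# Toy reference table: model index, parameter draw, summary statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(42)
n = 1000
model = rng.integers(0, 2, size=n)
theta = rng.normal(size=n)
stats = np.column_stack([theta + model, theta - model, rng.normal(size=n)])

# Model choice: RF classifier; the vote fraction for the selected model is
# the raw ingredient for the posterior-probability estimate.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(stats, model)
obs = np.array([[1.5, -0.5, 0.0]])  # pretend observed summaries
votes = clf.predict_proba(obs)[0]

# Parameter estimation: RF regressor; abcranger additionally uses quantile
# regression forests to get posterior quantiles, not just a point estimate.
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(stats, theta)
theta_hat = reg.predict(obs)[0]
```

Both fits consume the reference table in one shot, which is why the batch question in the Description matters.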
Description:
I would like to know the best way to integrate these posterior methodologies into the ELFI pipeline. It seems every inference method in ELFI should have an "iterate"-style method called with each new batch of samples, but neither methodology has one: they need the whole reference table at once.
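One way to reconcile the two could be sketched as follows: a hypothetical wrapper (this is not ELFI's actual `ParameterInference` API; class and method names are assumptions) that accepts batches but defers all training to the final result extraction, once the table is complete.

```python
# Hypothetical sketch (not ELFI's actual API): an inference method that
# receives batches through update() but only trains the forest once, in
# extract_result(), after the whole reference table has been buffered.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class BufferedRFInference:
    """Buffer (theta, stats) batches; fit a forest on the full table."""

    def __init__(self):
        self._theta, self._stats = [], []

    def update(self, theta, stats):
        # No per-batch learning: a Breiman forest needs the whole table.
        self._theta.append(np.asarray(theta))
        self._stats.append(np.atleast_2d(stats))

    def extract_result(self):
        theta = np.concatenate(self._theta)
        stats = np.vstack(self._stats)
        return RandomForestRegressor(
            n_estimators=50, random_state=0
        ).fit(stats, theta)

rng = np.random.default_rng(1)
inf = BufferedRFInference()
for _ in range(4):  # four batches of 100 simulations each
    theta = rng.normal(size=100)
    stats = theta[:, None] + rng.normal(scale=0.1, size=(100, 3))
    inf.update(theta, stats)
rf = inf.extract_result()
```

This keeps ELFI's batch-wise data flow intact while respecting the forests' need for the full table.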
See the demos at :
https://github.com/diyabc/abcranger/blob/master/testpy/Model%20Choice%20Demo.ipynb
and
https://github.com/diyabc/abcranger/blob/master/testpy/Parameter%20Estimation%20Demo.ipynb
Note that the basic rejection sampler is more than enough for these methodologies (and the threshold parameter matters very little).
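A minimal numpy-only sketch of that point, with a toy model and illustrative names: rejection ABC builds the reference table, and since the forest is trained on the accepted table itself, the quantile threshold mostly controls table size rather than the final posterior approximation.

```python
# Rejection-ABC sketch: draw from the prior, simulate summaries, and keep
# a quantile of the simulations closest to the observed summary statistic.
import numpy as np

rng = np.random.default_rng(7)
obs_stat = 2.0          # pretend observed summary statistic
n_sim = 10_000

theta = rng.uniform(-5, 5, size=n_sim)             # prior draws
stats = theta + rng.normal(scale=0.5, size=n_sim)  # simulated summaries
dist = np.abs(stats - obs_stat)                    # distance to observation

quantile = 0.1                                     # a loose threshold is fine
eps = np.quantile(dist, quantile)
accepted = dist <= eps
table = np.column_stack([theta[accepted], stats[accepted]])

print(table.shape[0])   # roughly quantile * n_sim rows
```

The resulting `table` is exactly the kind of reference table the RF methodologies consume.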
Regards,