User defined development factor in BootChainLadder #50
Comments
This might be possible, but I am not sure this would make sense from a statistical point of view. It sounds too much like a fudge.
Hi Markus. Apologies, but I don't see why this would be a fudge. It is market best practice (and also a main feature in many well-known reserving software packages) to perform adjustments on the development factors (removing outliers, reducing the calibration history, ...). Performing a bootstrap using the canonical factors when the "best estimate" is computed using adjusted factors doesn't make a lot of sense.
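As a minimal sketch of the kind of adjustment described above, the existing `weights` argument of `MackChainLadder()` can already exclude an individual link ratio (for example an outlier) from the factor calibration; the triangle and the excluded cell below are illustrative choices, not taken from this thread:

```r
library(ChainLadder)

## Illustrative only: exclude one outlying link ratio from the calibration
## of the development factors via the 'weights' argument.
Triangle <- RAA                                  # example triangle shipped with the package

W <- matrix(1, nrow(Triangle), ncol(Triangle))   # full weight everywhere ...
W[2, 1] <- 0                                     # ... except origin year 1982's first link ratio

M_adj <- MackChainLadder(Triangle, weights = W, est.sigma = "Mack")
M_raw <- MackChainLadder(Triangle, est.sigma = "Mack")

## The adjusted factors drive the best estimate; BootChainLadder() offers no
## comparable way to pass such adjustments in.
round(rbind(adjusted = M_adj$f, canonical = M_raw$f), 3)
```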
Hi Ludovic,
The idea of using data to estimate the variability of a set of judgmentally selected factors is the basis of the paper I wrote with Bardis and Majidi that extends the Mack/Murphy method to that situation. The paper is here: http://www.variancejournal.org/issues/06-02/143.pdf. Bardis' presentation at a CAS meeting is here: https://www.casact.org/education/spring/2013/handouts/Paper_2331_handout_970_0.pdf. I blogged about how to implement the technique with ChainLadder back in 2014: http://trinostics.blogspot.com/2013/07/implementing-clfm-with-chainladder.html
Ludovic appears to want to explore that possibility with England's Bootstrap method. We got pushback from some folks too. One should expect to see a higher standard error of the prediction to the extent the selections don't agree with the canonical factors, which is exactly what we found. If that is the only dataset available for one's estimates, then it is nice to have an algorithm that is able to respond with a scientifically based result. Eventually we were able to convince our reviewers of the value of our thesis.
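For reference, a minimal sketch of that CLFM idea using functions the package already exports (`ata()`, `CLFMdelta()`, `chainladder()`); the overridden factor below is an arbitrary illustrative value, not a recommendation:

```r
library(ChainLadder)

Triangle <- RAA

## Start from the canonical volume-weighted age-to-age factors ...
selected <- attr(ata(Triangle), "vwtd")

## ... and judgmentally override one of them (illustrative value only)
selected[1] <- 2.5

## CLFMdelta() searches for regression exponents delta under which the
## weighted-average link ratios reproduce the selected factors
delta <- CLFMdelta(Triangle, selected)

## Refit the chain-ladder regressions with those deltas and project the triangle
fit  <- chainladder(Triangle, delta = delta)
full <- predict(fit)
round(full, 0)
```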
…On Tue, Jun 19, 2018 at 9:15 AM, Markus Gesmann wrote:
Hi Ludovic,
I am aware of the practice of selecting 'suitable' factors. However, it appears to me to be based more on expert judgment, i.e. which data to include or exclude, than on statistics, and therefore to force a model to work with a given data set instead of selecting a suitable model for the data at hand. Anyhow, that's more a philosophical point and what I would call 'fudging'.
Yet, I am happy to support you if you would like to look into the implementation of your idea.
Hi,
I think it would be very useful to add an option to the BootChainLadder function that allows the user to force development factors other than those stemming from a pure application of the chain-ladder method. Indeed, the calibration of development factors involves a certain level of expert judgement, which can give rise to user-defined development factors. The bootstrap should then be based on those development factors and not on the canonical ones.
Is this possible? Thanks.
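To make the request concrete, here is a sketch of what the desired workflow might look like. The current `BootChainLadder()` takes the triangle, the number of bootstrap replicates `R`, and the process distribution, with no argument for user-selected factors; the `f.selected` argument named in the final comment is hypothetical, and the adjusted factor is an illustrative value only:

```r
library(ChainLadder)

Triangle <- RAA

## Canonical volume-weighted factors versus a judgmental selection
canonical <- attr(ata(Triangle), "vwtd")
selected  <- canonical
selected[1] <- 2.5
round(rbind(canonical, selected), 3)

## What is possible today: the bootstrap always re-derives the canonical factors
set.seed(1)
B <- BootChainLadder(Triangle, R = 999, process.distr = "gamma")
summary(B)

## The requested extension would be something along the lines of
## (hypothetical, not part of the current API):
## B_sel <- BootChainLadder(Triangle, R = 999, process.distr = "gamma",
##                          f.selected = selected)
```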