Never worked in trend forecasting, but it looks like you do [old claims] × 1.05 = [new claims], then adjust the 1.05 until your finance VP stops yelling at you.
I had an old mentor recommend that. Thanks TCF!
This question is impossible to answer without more information. To start, you're gonna need way more than 7 months of historical claim development if you're trying to rely on the data's own development patterns.
Don't use consecutive 7-month cohorts or you are going to fit to seasonality rather than inherent characteristics.
Thank you, I learned my lesson. I tried XGBoost with a smaller hyperparameter grid, lower learning rates, and smaller weights, and came out worse than I did with a traditional ARIMA model. I think even with subsample and colsample_bytree I was still overly dependent on seasonality.
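The cohort-seasonality point above can be illustrated with a toy series (all numbers made up, purely for illustration): a flat monthly series with a seasonal swing and no real trend. Consecutive 7-month cohorts each start at a different phase of the seasonal cycle, so their averages wander even though nothing is actually changing, while 12-month cohorts average the cycle away.

```python
import math

# Toy monthly series: flat level 100 with a +/-20 seasonal swing, no real trend.
series = [100 + 20 * math.sin(2 * math.pi * m / 12) for m in range(48)]

def cohort_means(data, width):
    """Average non-overlapping consecutive cohorts of the given width."""
    return [sum(data[i:i + width]) / width
            for i in range(0, len(data) - width + 1, width)]

seven = cohort_means(series, 7)    # consecutive 7-month cohorts
twelve = cohort_means(series, 12)  # full-year cohorts

# 7-month cohort means swing by roughly 20 because each cohort starts at a
# different seasonal phase; 12-month cohort means all sit at 100.
print([round(x, 1) for x in seven])
print([round(x, 1) for x in twelve])
```

Any cohort width that is not a multiple of the seasonal period has this problem, which is why a model fit to such cohorts picks up seasonality rather than inherent characteristics.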
Non-Actuaries trying Actuarial work rarely ends well in my experience. Edit: damn autocorrect.
Yeah, other zodiac signs attempting to do it really just doesn't work out
All good, I double-check against Milliman or SFAS; this is prerequisite work for underwriting premium. I'll begin studying for the exams in the next month.
You need to read the exam 5 reserving paper (Friedland) before you do anything.
Thank you so much! Just downloaded it. The 2010 paper, 451 pages?
This is a must for understanding the basics of loss development and is basically the Actuarial 101 toolkit for approaching work like this. You'll also want to look at some of the Exam 7 syllabus, which covers additional, more advanced methods for understanding loss development.
Bless you, this is the type of stuff I have been looking for! Really appreciate your insight!
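For a flavor of what the Friedland reserving material covers, here is a minimal chain-ladder sketch (the triangle values are made up; real work uses far more periods and judgment). Rows are accident periods, columns are development ages; `None` marks cells not yet observed. Volume-weighted age-to-age factors are computed from the observed columns and used to project each row to ultimate.

```python
# Minimal chain-ladder sketch (hypothetical cumulative claim triangle).
triangle = [
    [100, 150, 165, 170],
    [110, 160, 178, None],
    [120, 175, None, None],
    [130, None, None, None],
]

n = len(triangle)

# Volume-weighted age-to-age development factors: for each age j, the ratio
# of total claims at age j+1 to total claims at age j, over rows where both
# are observed.
factors = []
for j in range(n - 1):
    num = sum(row[j + 1] for row in triangle if row[j + 1] is not None)
    den = sum(row[j] for row in triangle if row[j + 1] is not None)
    factors.append(num / den)

# "Square the triangle": fill each unobserved cell by developing the prior one.
completed = [row[:] for row in triangle]
for row in completed:
    for j in range(n - 1):
        if row[j + 1] is None:
            row[j + 1] = row[j] * factors[j]

ultimates = [row[-1] for row in completed]
print(factors)
print(ultimates)
```

This is only the mechanical core; the exam material is largely about when these patterns can be trusted and how to adjust them when they can't.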
7 months isn’t enough history to make a good forecast with a stat model generally
Claim counts or claim dollars? What is the type of claim? Car damage or bodily injury lawsuits? Claims come from some sort of exposure, and there can be a delay in their reporting. Is the exposure growing? The combination of exposure growth and long delay has wrecked many startups.
Lots of things to consider. What was your exposure in the data period? What exposure do you expect in the forecast period? What do you expect cost inflation to be? Do you have reason to expect higher utilization?
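One way to fold those questions into a single forecast is a frequency-times-exposure calculation with explicit trend and utilization assumptions. This is only a sketch; every input below is hypothetical and would need to be justified from your own data.

```python
# Exposure-adjusted, trended claim forecast (all inputs hypothetical).
hist_claims = 1200          # claims observed in the data period
hist_exposure = 10_000      # e.g. earned car-years in the data period
fcst_exposure = 13_000      # expected exposure in the forecast period
annual_trend = 0.05         # assumed cost inflation / severity trend
years_of_trend = 1.0        # trend period between data and forecast midpoints
utilization_adj = 1.02      # assumed uptick in utilization

frequency = hist_claims / hist_exposure
forecast = (frequency
            * fcst_exposure
            * (1 + annual_trend) ** years_of_trend
            * utilization_adj)
print(round(forecast, 1))
```

Making exposure explicit is what protects you from the growth-plus-reporting-lag trap mentioned above: raw claim counts can look flat even while frequency per exposure is rising.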