01 June 2009

Traditional Insurance Pricing - A Data Quality Perspective

As I said in my first blog, my formal training is as a pricing actuary. In traditional pricing, the emphasis is on projecting the overall price for insurance. This may be either at a portfolio level (e.g. all personal auto policies in the state of Missouri) or at an individual level (e.g. liability insurance for a Fortune 500 company).

For property and casualty insurance, one common technique is to use past premium and claims data to project future claim costs. Once these future costs are determined, additional amounts for expenses, capital costs, profit and contingencies are added.

The additional components have their own data challenges, but today I am going to talk about the data quality considerations for projecting future claim costs. To begin, there are certain actuarial and/or statistical assumptions that I will make. Sometimes these hold in the real world and sometimes they don't, but they are not central to the data quality discussion.

Assumptions:
  • There is a sufficient amount of data to be able to make a future projection.
  • The policies that were written in the past are expected to be similar to the policies written in the future.
  • The data is homogeneous.
  • The data is not distorted by an unexpected number of large claims (either too few or too many).
Adjustments:

Once you have the data from the past, there are several things that need to be done to it. Usually the premium and claims data is aggregated by year, either by Policy Year or by Earned/Accident Year. Policy Year is based on the effective date of the policy, while Accident Year is based on when the premium is earned and the claims are incurred. The choice of which to use does not change the technique; it only changes the values of the parameters used to adjust the data.
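To make the two bases concrete, here is a minimal sketch in Python (with made-up records) of how the same claim can land in different years depending on whether it is grouped by the policy's effective date or by the date of loss:

    from collections import defaultdict

    # Hypothetical claim records: (policy effective year, loss year, paid amount)
    claims = [
        (2007, 2008, 1000.0),  # written in 2007, loss occurred in 2008
        (2008, 2008, 2500.0),
    ]

    by_policy_year = defaultdict(float)
    by_accident_year = defaultdict(float)
    for eff_year, loss_year, amount in claims:
        by_policy_year[eff_year] += amount     # follows the policy's effective date
        by_accident_year[loss_year] += amount  # follows the date of loss

    print(dict(by_policy_year))    # {2007: 1000.0, 2008: 2500.0}
    print(dict(by_accident_year))  # {2008: 3500.0}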

There are three main adjustments made to the data, and they are done for two main reasons. The first is to adjust the data so that it is on today's basis, and the second is to project it to the time period for which the future policies will be written.

First, over the time period of the data, the policies were written at many different rates. The goal of pricing using this method is to determine how much TODAY's rates need to be changed for the future period. Therefore, the premium from past years needs to be adjusted so that it is stated at today's rates.
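Here is a hedged sketch of that rate-level adjustment (often called putting premium "on level"). The figures are hypothetical: a +5% rate change in 2007 and +3% in 2008 mean that 2006 premium must be multiplied by both factors to be stated at today's rates:

    # Hypothetical earned premium by year
    earned_premium = {2006: 900000.0, 2007: 950000.0, 2008: 1000000.0}

    # Cumulative factor to bring each year's premium to the current rate level
    on_level_factor = {2006: 1.05 * 1.03, 2007: 1.03, 2008: 1.00}

    on_level_premium = {
        year: premium * on_level_factor[year]
        for year, premium in earned_premium.items()
    }
    print(on_level_premium)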

Second, at the time of the analysis many claims will still be open, with payments yet to be made. Adjustments have to be made to develop the claims to their ultimate value.
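A minimal sketch of that development to ultimate, using hypothetical factors (in practice these are estimated from historical loss triangles, e.g. via the chain-ladder method):

    # Hypothetical reported claims by year
    reported_claims = {2006: 600000.0, 2007: 550000.0, 2008: 400000.0}

    # Factor from each year's current maturity to ultimate; more recent years
    # need larger factors because more of their claims are still open
    dev_factor_to_ultimate = {2006: 1.05, 2007: 1.15, 2008: 1.40}

    ultimate_claims = {
        year: claims * dev_factor_to_ultimate[year]
        for year, claims in reported_claims.items()
    }
    print(ultimate_claims)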

Third, the claims (and sometimes the premium) need to be adjusted to today's dollars. They then need to be trended to the period in which the future policies will be written.
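A sketch of that trending step, assuming a hypothetical 4% annual claim cost trend and a future policy period centered on 2010:

    ANNUAL_TREND = 0.04  # hypothetical annual claim cost inflation
    FUTURE_YEAR = 2010   # assumed average loss date for the future policies

    # Ultimate claims by year (hypothetical, already developed to ultimate)
    ultimate_claims = {2006: 630000.0, 2007: 632500.0, 2008: 560000.0}

    trended_claims = {
        year: claims * (1 + ANNUAL_TREND) ** (FUTURE_YEAR - year)
        for year, claims in ultimate_claims.items()
    }
    print(trended_claims)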

Loss Ratio:
After these adjustments are made, a loss ratio is calculated for each year by dividing the claim costs by the premium. The loss ratios are then averaged over all of the years (using either simple or weighted averages) and compared to a target loss ratio. If the average loss ratio is higher than the target, the price needs to be increased; if it is lower, the price needs to be decreased.
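Putting it together, here is a minimal sketch of the loss ratio comparison, with hypothetical adjusted figures (rounded from the earlier sketches); the indicated rate change is simply the average loss ratio divided by the target, minus one:

    # Hypothetical adjusted premium and claims by year
    on_level_premium = {2006: 973350.0, 2007: 978500.0, 2008: 1000000.0}
    trended_claims = {2006: 737000.0, 2007: 711500.0, 2008: 605700.0}

    # Premium-weighted average loss ratio across the experience years
    avg_loss_ratio = sum(trended_claims.values()) / sum(on_level_premium.values())

    TARGET_LOSS_RATIO = 0.65  # hypothetical target after expenses and profit
    indicated_change = avg_loss_ratio / TARGET_LOSS_RATIO - 1
    print(f"average loss ratio: {avg_loss_ratio:.3f}")
    print(f"indicated rate change: {indicated_change:+.1%}")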

Data Quality Considerations
For traditional pricing, data is aggregated. Because of this, there are two main data quality considerations:
  • Reconciliation - The premium and claims data used for pricing can be compiled in various ways: from a data warehouse, from data marts, directly from source systems, or from various reports. When management sees the results of the pricing analysis, they will compare them to data they have seen in other reports. Therefore, the premium and claims should be reconciled to a report that management is familiar with. The purpose of this reconciliation exercise is not to force a match with the management reports; it is to be able to explain any differences that exist between the two.
  • Reasonableness - The actuary performing the analysis is usually someone who is very familiar with the product that he/she is pricing. Therefore, when the premium and claims are aggregated and the adjusted loss ratio is calculated, the values should fall within the range of reasonableness the actuary expects, as sketched below.
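A hedged sketch of both checks, with hypothetical totals, tolerances, and bounds; the point of the reconciliation is to surface and explain the difference, not to force it to zero:

    # Reconciliation: compare the pricing data total to a familiar management report
    pricing_premium = 2951850.0
    management_report_premium = 2960000.0  # hypothetical figure from a management report

    difference = pricing_premium - management_report_premium
    print(f"premium difference vs. management report: "
          f"{difference:,.0f} ({difference / management_report_premium:+.2%})")

    # Reasonableness: flag an adjusted loss ratio outside the actuary's expected range
    EXPECTED_RANGE = (0.55, 0.80)  # made-up bounds for this product
    adjusted_loss_ratio = 0.696
    if not EXPECTED_RANGE[0] <= adjusted_loss_ratio <= EXPECTED_RANGE[1]:
        print("loss ratio outside expected range -- investigate before pricing")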
Because the data is aggregated, many dimensions such as completeness, consistency, duplication, etc. are not always addressed in traditional pricing analysis. Many times the actuary will assume that these tests have already been conducted on the data beforehand and will trust the data as given.

In my next blog, I will talk about a change taking place in actuarial analysis that leads us to look at the data in a whole new way and has required a change in how the actuary considers data quality.

1 comment:

  1. Jeremy

    This is a great post. It gives a wonderful insight into the value of good quality information to a process we all wind up involved in (who doesn't buy insurance?).

    The impact of data quality on the aggregated data, and the risk that actuaries will assume the quality of the data to have been tested is a great example of how IQTrainwrecks happen.

    Finally - I'm intrigued to hear more about the changes that are requiring a shift in how data quality is considered.
