In 1989, when I started my career as a pricing actuary, auto insurance rating plans were relatively simple. For example, most insurers had only a handful of age categories, and territorial rating was done at the county level. Since then, insurers have invested millions of dollars in people, data collection and predictive modeling tools to build complex rating plans that help them charge the “right” rate for each risk and avoid adverse selection at the hands of competitors that do the same.
Enter telematics. Telematics provides actual data on how, how much, when and where people drive their vehicles. Because most of the traditional factors used in even the most sophisticated models are simple proxies for these driving decisions, telematics data offers a step-change improvement in rating accuracy – IF done right.
Why scores aren’t equal
Willis Towers Watson’s analysis of data from our DriveAbility program confirms that a properly developed telematics score can be many times more powerful than any traditional factor. But we have also seen telematics scores that don’t provide much benefit beyond just using verified mileage. In at least one case, a basic score — which had a mileage component — was less predictive than verified mileage alone (meaning the analyst destroyed value by incorrectly identifying, quantifying and combining various driving behaviors).
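As an illustration only (not WTW’s DriveAbility methodology), the comparison above can be sketched with a simple top-half-versus-bottom-half lift measure on an invented portfolio. The data, the “good” score and the “basic” score below are all synthetic stand-ins:

```python
# Illustrative sketch: comparing the predictive lift of telematics scores
# against verified mileage alone. Synthetic data; not an actual methodology.

def lift(predictor, losses):
    """Ratio of average losses in the top predicted half vs. the bottom half.
    Higher = the predictor separates good and bad risks more strongly."""
    pairs = sorted(zip(predictor, losses))
    half = len(pairs) // 2
    bottom = sum(loss for _, loss in pairs[:half]) / half
    top = sum(loss for _, loss in pairs[half:]) / (len(pairs) - half)
    return top / bottom

# Synthetic vehicle-years: annual mileage (thousands), a behavioral riskiness
# factor, and claim losses driven by both.
mileage  = [5, 8, 10, 12, 15, 18, 20, 25, 30, 40]
behavior = [0.2, 0.9, 0.3, 1.5, 0.4, 2.0, 0.5, 2.5, 0.6, 3.0]
losses   = [m * b for m, b in zip(mileage, behavior)]

good_score  = [m * b for m, b in zip(mileage, behavior)]  # captures both loss drivers
basic_score = behavior                                    # ignores mileage entirely

print(f"lift, verified mileage: {lift(mileage, losses):.2f}")
print(f"lift, good score:       {lift(good_score, losses):.2f}")
print(f"lift, basic score:      {lift(basic_score, losses):.2f}")
```

On this toy book the mis-built “basic” score ranks risks worse than verified mileage alone, while a score that correctly combines behavior with mileage ranks them better than either ingredient by itself.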
So, why are some insurers settling for basic scores that may not improve their business? One explanation is that they don’t have the data and/or know-how to build a proper score. Another explanation is that they truly believe a basic score is “good enough”. Insurers going this route are betting on two things:
- Customers who volunteer for usage-based insurance (UBI) policies are better than average drivers, justifying discounts based on self-selection alone.
- Their competitors don’t have access to telematics data, so they aren’t as concerned about being “out-priced”.
There are several reasons those bets may be misplaced.
The biggest thing that should scare insurers who use basic scores is the emergence of telematics data from the Internet of Things (IoT). Auto manufacturers (OEMs), telecommunication companies and others have telematics data on their customers that is comparable to the data being used by insurers for UBI – and they are increasingly making it available to insurers via data and insurance exchanges. This will level the playing field between existing and new insurers vying for the business. It will also give those who most effectively optimize the use of the telematics data a significant competitive advantage over those relying on basic scores.
Even without data portability, insurers who want a profitable UBI program should eschew the use of basic scores. Our analysis shows there is a significant difference in risk between the best and worst drivers in these programs. What’s more, many drivers who consider themselves “better than average” aren’t, and probably shouldn’t have volunteered. The profitability of a UBI program relies on an accurate assessment of individual risk so that the insurer can reward customers who really deserve it.
Using telematics scoring data to increase and improve customer interactions only works if the customer has a good experience and trusts the information being shared. For example, harsh braking is a common metric used in simple scores. While there is a correlation between the number of harsh brakes and the chances of having an accident, harsh brakes don’t cause accidents (but rather help drivers avoid them). In the best case, such a metric elicits comments like: “Oh, are you saying I should run over the dog that ran in the road?” In the worst case, it can cause drivers to change their behavior in a bad way. One driver we interviewed reported: “I have reduced my number of harsh brakes. I now run through the light rather than stopping abruptly on the yellow.” A good score should help drivers understand behaviors that cause accidents (or near accidents) to happen, so they can change that behavior.
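To make the metric concrete, here is a minimal sketch of the kind of simple event count that basic scores rely on: flagging “harsh brakes” as large one-second speed drops in a trip trace. The 8 km/h-per-second threshold and the sample trip are assumptions for illustration, not an industry standard:

```python
# Illustrative sketch: counting "harsh brake" events from a 1 Hz speed trace.
# The threshold is invented for illustration; real programs tune such values.

def harsh_brake_count(speeds_kmh, threshold=8.0):
    """Count one-second intervals where speed drops by more than `threshold` km/h."""
    return sum(
        1 for prev, cur in zip(speeds_kmh, speeds_kmh[1:])
        if prev - cur > threshold
    )

trip = [50, 50, 48, 35, 20, 18, 30, 42, 50, 50]  # km/h, sampled once per second
print(harsh_brake_count(trip))  # prints 2
```

Note what the count cannot tell you: whether those two events were reckless tailgating or a safe reaction to a dog in the road. That missing context is exactly why a raw event count, used alone, can confuse or even misdirect drivers.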
And finally, beware of scores that are built judgmentally or based on publicly available data, even if developed analytically. Experience shows there is no quality guarantee, and that it is very difficult to judge how good a score is until the insurer has collected enough data to assess it quantitatively.
We recommend a few questions that will help assess the quality of a scoring algorithm:
- Did the analyst use granular telematics data or rely on the analysis of simple events and averages?
- Does the telematics score use useful external data to contextualize driving behaviors?
- Was the scoring algorithm developed using multivariate analysis with actual insurance claims experience associated with the vehicle/driver? And, were the claims from the same time period the telematics data was collected?
- Was the lift of the scoring algorithm validated using a proper hold-out data set?
- Did the analyst include traditional factors to determine how the telematics score will affect current rating factors that may be proxies for actual telematics data?
- Is there sufficient actuarial justification to obtain approval for the filings in challenging regulatory states?
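The hold-out question above can be sketched in a few lines: fit even a trivial one-factor model on a training set, then confirm that its lift holds up on data it never saw. Everything here is a synthetic stand-in (the data, the proportional-cost “model” and the split sizes are assumptions for illustration):

```python
# Illustrative sketch of hold-out validation: does the lift measured on the
# training data persist on unseen data? Synthetic data throughout.
import random

random.seed(0)

# Synthetic vehicle-years: (annual_mileage, claim_cost), cost loosely tied to mileage
data = [(m, m * random.uniform(0.5, 1.5)) for m in range(1, 201)]
random.shuffle(data)
train, holdout = data[:150], data[150:]

# Trivial "model": predicted cost proportional to mileage, scale fit on train only
scale = sum(cost for _, cost in train) / sum(m for m, _ in train)

def predict(mileage):
    return scale * mileage

def lift(records):
    """Ratio of actual cost in the top predicted half vs. the bottom half."""
    ranked = sorted(records, key=lambda rec: predict(rec[0]))
    half = len(ranked) // 2
    bottom = sum(cost for _, cost in ranked[:half]) / half
    top = sum(cost for _, cost in ranked[half:]) / (len(ranked) - half)
    return top / bottom

print(f"lift on train:    {lift(train):.2f}")
print(f"lift on hold-out: {lift(holdout):.2f}")  # should be similar, not collapse
```

If the hold-out lift collapses relative to the training lift, the score has been overfit to the data it was built on; a basic score validated only on its own training data can look far better than it really is.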
For more information on usage-based insurance and telematics scoring data, see “Powered by the IoT, auto insurance is poised for a revolution”.