How can insurers find real value in their predictive models? Experiment thoughtfully with practical implementation top of mind.

There’s a lot of conversation about new modeling approaches and novel sources of data poised to revolutionize insurance.

This extraordinary industry transformation actually began about a decade ago. Analytical methods such as generalized linear models (GLMs) and decision trees were combined with new data sources, including credit attributes and prior insurance history, to improve pricing and underwriting sophistication.

More recent developments, including vastly improved technology (e.g., hyper-scale computing and distributed storage), an influx of new talent, and the availability of open-source programming languages and libraries, are providing even greater opportunities to explore what insights can be extracted from an increasingly wide array of data sources and formats.

Are these influences triggering a revolution or evolution in insurance analytics? And how can insurers find real value in their predictive models?

Revolution or evolution? You decide.

Much of the buzz in insurance analytics circles is centered on investigating new analytical methods. Some of the techniques that are getting the most attention right now include:

  • gradient boosting machines (GBMs)
  • penalized regression methods
  • neural networks
  • genetic algorithms
  • ensembles of different methods

While these methods are quite exciting, it’s equally important for insurers to recognize the potential impact of new data sources. Adding more diverse yet relevant data assets to an analysis yields far more predictive power than applying more complex algorithms to existing data, as evidenced by usage-based auto insurance.

Additionally, insurers need to explore what types of problems different methods can address. No single method is perfectly suited to every business problem, and a variety of methods can add value at different stages of the modeling process.

  • Topic modeling can help create new data features from unstructured text such as claims adjuster notes.
  • Elastic nets can be useful in selecting factors for consideration in modeling (a brief sketch follows this list).
  • GBMs can help detect higher-order interactions.
  • Multivariate adaptive regression splines can help identify model hierarchies that capture complexity via a greater number of simpler models on well-defined segments.

The end result is a more robust analysis.
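To make the elastic net step concrete, here is a minimal sketch using scikit-learn’s ElasticNetCV to screen candidate rating factors. All factor names and values are synthetic, invented purely for illustration; only the first two factors carry real signal.

```python
# Minimal sketch: elastic-net screening of candidate rating factors.
# All data are synthetic; only the first two factors truly matter.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 6))                       # six candidate factors
y = 100 + 30 * X[:, 0] - 20 * X[:, 1] + rng.normal(scale=25, size=n)

# Standardize so the penalty treats every factor on the same scale.
X_std = StandardScaler().fit_transform(X)

# Cross-validated elastic net blends L1 (sparsity) and L2 (stability).
enet = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5).fit(X_std, y)

# Factors whose coefficients survive the penalty are candidates to model.
for i, coef in enumerate(enet.coef_):
    print(f"factor_{i}: {coef:+.2f}", "keep" if abs(coef) > 1e-6 else "drop")
```

In practice, the surviving factors would then feed a downstream model such as a GLM or GBM rather than be used directly.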

In fact, many interviews with Kaggle competition winners suggest that they do not necessarily credit their successes to the primary modeling method but, rather, to methods that enable better model inputs or corrections to the primary methods.

The inevitable question from the top: Where’s the value?

As insurance company management hears more about advanced analytical methods, a natural question arises: how do these new methods really add value, and more specifically, how do you even measure that value?

To provide a meaningful answer for management, the analytics team should examine both statistical and financial value measures.

Statistical measures

Statistical measures, such as the Gini coefficient or mean absolute error (MAE), have meaning among actuaries and data scientists, but often don’t give management an intuitive sense of the value added. Moreover, the measures themselves often disagree when ranking the accuracy of various methods.
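As an illustration of how these two measures are computed, the sketch below evaluates placeholder holdout predictions. The Gini function is one common ordering-based variant (implementations differ in how they normalize), and the numbers are invented.

```python
# Sketch: two common statistical measures on holdout data (placeholder numbers).
import numpy as np
from sklearn.metrics import mean_absolute_error

def gini(actual, predicted):
    """One common Gini variant: order actual losses from highest to lowest
    predicted risk and measure how far cumulative losses depart from a
    random ordering. Often normalized by the Gini of a perfect model."""
    losses = np.asarray(actual, dtype=float)[np.argsort(predicted)[::-1]]
    n = len(losses)
    cum_share = np.cumsum(losses) / losses.sum()
    return cum_share.sum() / n - (n + 1) / (2 * n)

actual = np.array([0, 0, 500, 0, 1200, 0, 300, 0])
predicted = np.array([80, 90, 400, 70, 900, 60, 350, 100])

print("MAE :", mean_absolute_error(actual, predicted))
print("Gini:", round(gini(actual, predicted), 3))
```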

Financial measures

Financial measures are imperative for getting buy-in and gaining confidence from management. For example, when exploring new methods or new data for pricing and underwriting, estimating the loss ratio on actual out-of-sample claims can more effectively engage company management.
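As a hypothetical illustration of that kind of analysis, the sketch below ranks holdout policies into deciles by predicted loss cost and compares actual loss ratios across deciles. All field names and data are placeholders, not a recommended standard.

```python
# Sketch: a financial view of model lift (all data are placeholders).
# Rank holdout policies by predicted risk, then compare actual loss ratios.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
holdout = pd.DataFrame({
    "earned_premium": rng.uniform(400, 1200, n),
    "predicted_loss": rng.gamma(2.0, 150.0, n),   # hypothetical model output
})
# Actual losses correlate imperfectly with predictions, as in real portfolios.
holdout["actual_loss"] = holdout["predicted_loss"] * rng.lognormal(0.0, 0.8, n)

holdout["decile"] = pd.qcut(holdout["predicted_loss"], 10, labels=False) + 1
by_decile = holdout.groupby("decile").agg(
    losses=("actual_loss", "sum"), premium=("earned_premium", "sum"))
by_decile["loss_ratio"] = by_decile["losses"] / by_decile["premium"]

# A loss ratio that rises steadily by decile signals genuine lift.
print(by_decile["loss_ratio"].round(3))
```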

We work with companies to design the right financial measures, including sensible underlying assumptions, to provide forecasts that make sense. In fact, in areas of the insurance company where data-driven solutions are relatively new, it’s even more important to prove the financial value of the models to leadership.

Need help unlocking your analytical potential?

Methods such as GLMs are well-accepted in areas such as pricing because of their transparency, ease of implementation (in traditional table-based rating engines) and execution speed. Other insurance applications place different values on the various dimensions. For example, producing direct mailing lists based on expected profitability and likelihood to buy does not require high levels of transparency, and implementation requires a list of addresses rather than inputs to table-based engines.
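The table-based implementation point can be seen in a small sketch: with a log-link GLM, each fitted coefficient exponentiates into a multiplicative relativity that drops straight into a rating table. The statsmodels model below uses invented variables and simulated data, assumed here only to illustrate the mechanics.

```python
# Sketch: a log-link GLM's coefficients exponentiate directly into the
# multiplicative relativities a table-based rating engine expects.
# Variable names and data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "age_band": rng.choice(["18-24", "25-39", "40+"], n, p=[0.2, 0.5, 0.3]),
    "territory": rng.choice(["urban", "rural"], n),
    "exposure": rng.uniform(0.5, 1.0, n),
})
# Simulated claim frequency: young drivers and urban territories run higher.
base = 0.10 * np.where(df["age_band"] == "18-24", 2.0, 1.0) \
            * np.where(df["territory"] == "urban", 1.3, 1.0)
df["claims"] = rng.poisson(base * df["exposure"])

# Poisson frequency GLM with log link and an exposure offset.
model = smf.glm("claims ~ age_band + territory", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["exposure"])).fit()

# exp(coefficient) is the rating-table relativity for each factor level.
print(np.exp(model.params).round(3))
```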

What’s needed to make the change?

Analytics are transforming the insurance industry. Realizing that transformation, however, requires thoughtful experimentation and constant consideration of implementation requirements.

This post was written by Claudine Modlin, who specializes in P&C insurance analytics.

