By Tanja Magas, Chief Data and Analytics Officer at Democrance
A renewed focus on harvesting data, together with advances in artificial intelligence (AI), began to subtly impact the insurance industry some 20 years ago. In particular, the space saw a spike in buzz with the development of neural networks in the 1990s.
But transformative activity picked up more recently, driven mainly by developments in cloud computing, significant improvements in processing power, and a boom in digitization, automation, and data. According to the Swiss Re Institute, the number of AI-related patents filed by insurers increased by 5,600% over the last decade. And AI-related mentions in insurer investor reports skyrocketed from virtually zero to 116 in just four years.
All of which raises an important question: are we entering a new era of insurance and, if so, how can we sustain the wave?
The requirements are manifold: an enterprise-level data strategy and vision, committed management, skilled and dedicated talent, and conducive regulation (that is, enabling data privacy and security laws).
But two key bottlenecks should be addressed first.
Firstly, organizations need to prioritize data availability and quality. To train and inform models, algorithms need data that is clean, exhaustive, relevant, timely, and sufficient. Key takeaway: insurance players need to invest in data engineering. Hybrid models (based on causal inference from comparable industries) are also worth exploring, as they can be less sensitive to data quality. Without these foundations, models have proven slow, inaccurate, and even more expensive than the human-centric processes they replace.
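To make the data engineering point concrete, here is a minimal sketch of the kind of automated quality gate such an investment might start with. The field names, thresholds, and sample records are hypothetical, not drawn from any particular insurer's stack:

```python
# A minimal, illustrative data-quality gate for a policy extract.
# Field names ("premium", "updated_at") and thresholds are hypothetical.
import pandas as pd

def quality_report(df: pd.DataFrame, max_missing: float = 0.05,
                   max_staleness_days: int = 30) -> dict:
    """Flag basic completeness, duplication, and freshness issues."""
    report = {
        # Share of missing values per column (completeness).
        "missing_share": df.isna().mean().to_dict(),
        # Exact duplicate rows (often a sign of broken ingestion).
        "duplicate_rows": int(df.duplicated().sum()),
    }
    if "updated_at" in df.columns:
        # Rows older than the staleness threshold (timeliness).
        age_days = (pd.Timestamp.now() - pd.to_datetime(df["updated_at"])).dt.days
        report["stale_rows"] = int((age_days > max_staleness_days).sum())
    report["passes"] = (
        max(report["missing_share"].values(), default=0.0) <= max_missing
        and report["duplicate_rows"] == 0
    )
    return report

# Example: a tiny policy extract with one missing premium.
policies = pd.DataFrame({
    "policy_id": [1, 2, 3],
    "premium": [1200.0, None, 950.0],
    "updated_at": ["2024-01-05", "2024-01-06", "2023-06-01"],
})
print(quality_report(policies))
```

Trivial as it looks, a gate like this in front of the training pipeline is often the first line of defense against the slow, inaccurate models described above.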
Secondly, organizations need to enable supportive business processes. Business use cases need to be carefully matched with modeling techniques and the underlying data sets. Rounds of trial and error are critical for testing alternative approaches. Lastly, business applications need to be carefully vetted, because the cost of errors is asymmetric: generating a false sales lead is far less costly than an underwriting miscalculation.
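To illustrate that asymmetry with purely hypothetical figures:

```python
# Back-of-envelope comparison of error costs; all figures are invented.
cost_false_lead = 50.0               # wasted sales effort per bad lead
cost_underwriting_error = 250_000.0  # one badly mispriced large risk

lead_errors, uw_errors = 200, 1      # even many lead errors stay cheap
print(f"Lead-generation errors: {lead_errors * cost_false_lead:,.0f}")
print(f"Underwriting errors:    {uw_errors * cost_underwriting_error:,.0f}")
```

Two hundred bad leads cost a fraction of a single underwriting mistake, which is why low-stakes applications are the natural proving ground for early models.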
Sounds daunting? It will surely pay off. In the meantime, industry players can already trial creative approaches to data applications. And some of the clearest practical applications lie in the underwriting vertical.
Imagine you want to underwrite a small business (let’s say, a restaurant), but go beyond the traditional method of relying solely on a D&B or ISO report. So, you put together a Natural Language Processing (NLP)-powered model that scrapes websites specific to the F&B industry. The algorithm picks up risk-classification (high-hazard) words from diner reviews on TripAdvisor or OpenTable and pulls them into the rating model. The output? A recommended higher premium to compensate for that signature flaming mocktail or self-serve fondue stove that would not appear in mainstream underwriting data sets. And, of course, an untraditional, data-driven approach to underwriting (not to mention the bragging rights over your peers).
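A toy sketch of what such a keyword-based hazard signal could look like follows. The hazard lexicon, loading factors, sample reviews, and premium figures are all invented for illustration; a production system would rely on a trained NLP classifier and properly licensed data access rather than ad-hoc scraping:

```python
# Toy version of the review-based hazard signal described above.
# The hazard terms, loadings, and premiums are hypothetical.
import re

# Hypothetical high-hazard terms for an F&B risk model,
# each mapped to an invented premium loading.
HAZARD_TERMS = {
    "flaming": 0.15,   # open-flame table service
    "flambe": 0.15,
    "fondue": 0.10,    # self-serve heat source at the table
    "hibachi": 0.12,
}

def hazard_loading(reviews: list[str]) -> float:
    """Return a premium loading factor from hazard terms found in reviews."""
    text = " ".join(reviews).lower()
    loading = 0.0
    for term, factor in HAZARD_TERMS.items():
        if re.search(rf"\b{term}\b", text):
            loading += factor
    return min(loading, 0.5)  # cap the total surcharge

reviews = [
    "Loved the signature flaming mocktail, quite the show!",
    "The self-serve fondue was fun but the stove felt wobbly.",
]
base_premium = 4_000.0
premium = base_premium * (1 + hazard_loading(reviews))
print(f"Recommended premium: {premium:,.2f}")  # 4,000 * 1.25 = 5,000.00
```

The design point is the pipeline, not the lexicon: unstructured review text becomes a structured rating factor that a conventional model would never see.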
And the potential is now tangible. According to McKinsey, 22% of enterprises across industries already attribute more than 5% of their EBIT to AI. And HBR estimates that AI will add $13 trillion to the global economy over the next decade.
But since most insurance players are at an earlier stage of automation and technology adoption than other industries, the potential for transformation and upside is comparatively large. Exciting times lie ahead.