Machine learning and artificial intelligence (AI) have been used by the insurance industry for more than a decade, yet the fast-moving technology continues to reshape the form and function of the insurance process.
Insurance companies use AI "to make their business more efficient," says Kathleen Birrane, Maryland Insurance Commissioner and chair of the Innovation, Cybersecurity, and Technology Committee at the National Association of Insurance Commissioners. "They use it in the underwriting process and in evaluating where the risk is in the claims process. There is no part of the process that doesn't use this technology."
How AI Is Being Used in Insurance
AI is simply the technology of computers simulating human reasoning by analyzing masses of historical data. "It has been around for decades," Birrane says. But now, that technology is at the heart of the insurance business. For instance, AI now sifts through vast quantities of data to evaluate consumer behavior. It is used heavily to identify suspected fraud, such as a claim for injury compensation when no accident occurred. AI systems can be trained to look for specific characteristics of a claim that are associated with past fraud to flag current questionable claims. Insurance companies are required by regulators to make sure that seemingly illegitimate insurance claims get investigated.
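The fraud-flagging approach described above — learning which claim characteristics were associated with past fraud, then flagging new claims that share them — can be sketched in a few lines. This is a hypothetical, simplified illustration (the trait names, lift threshold, and `flag_claim` helper are all invented for the example), not any insurer's actual system:

```python
# Minimal sketch: flag claims that share characteristics with past fraud.
from collections import Counter

def learn_fraud_traits(past_claims, min_lift=2.0):
    """Return traits that appear disproportionately often in fraudulent claims."""
    fraud, legit = Counter(), Counter()
    for claim in past_claims:
        (fraud if claim["fraud"] else legit).update(claim["traits"])
    n_fraud = sum(1 for c in past_claims if c["fraud"]) or 1
    n_legit = sum(1 for c in past_claims if not c["fraud"]) or 1
    suspicious = set()
    for trait, count in fraud.items():
        fraud_rate = count / n_fraud
        legit_rate = legit[trait] / n_legit
        # Keep traits at least `min_lift` times more common among fraud cases.
        if legit_rate == 0 or fraud_rate / legit_rate >= min_lift:
            suspicious.add(trait)
    return suspicious

def flag_claim(claim_traits, suspicious, threshold=2):
    """Route a claim to human investigation if enough suspicious traits co-occur."""
    return len(set(claim_traits) & suspicious) >= threshold
```

Note that, as in the article, the output is a flag for human investigation, not a final verdict on the claim.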
Assessing risk is also where AI shines. Seventy years ago, an insurance agent would page through charts in books created by actuaries to determine the likely risks associated with an activity or the age of a person who wished to be insured. Now, instead of manually weighing five or six risk-related factors, computers use algorithms to sift through data looking for correlations between risk and various characteristics. This process of algorithm-powered data analysis is known as machine learning.
"We ask the question: What characteristics are predictive of risk? And we do that by mining past data [to create a set of parameters for use today]," Brianne says. The likelihood of a risk, such as a traffic accident occurring, is at the core of the insurance business.
Today’s AI Is Still Driven by Skilled Underwriters
To be clear: AI does not make decisions for a company. Rather, it creates front-end efficiencies that allow human employees to vet more sophisticated data. "We're in that phase now from a predictive standpoint where we can look at the data, make an assessment, and then the underwriter or the individual reviewing that risk is still making some decisions," says Brad John, head of the life sciences industry practice at The Hartford.
However, human autonomy may shrink in the future. "In the not-too-distant future, AI is going to become robust enough that it can actually start to make some prescriptive decisions," John says.
Such machine-led output could soon involve what coverages get offered or what are the terms and conditions for a particular risk, John says. "It could be something as simple as a decision tree where there's ultimately a yes or no answer, or it could be something that's more nuanced."
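The "simple decision tree" John mentions can be pictured as a chain of yes/no questions that ends in a coverage decision. The sketch below is purely illustrative — the questions, thresholds, and outcomes are invented assumptions, not any insurer's actual rules:

```python
# Hypothetical decision tree: each branch asks a yes/no question about the
# risk until coverage terms are decided or the case is referred to a human.
def coverage_decision(risk):
    """Walk a hand-built decision tree and return an offer or a referral."""
    if risk["autonomous"]:
        # Novel exposures still go to a skilled human underwriter.
        return "refer to underwriter"
    if risk["prior_claims"] > 2:
        return "decline"
    if risk["value"] > 1_000_000:
        return "offer with exclusions"
    return "offer standard terms"
```

The "more nuanced" alternative John alludes to would replace these hand-written branches with a model learned from data, but the structure — questions leading to terms and conditions — is the same.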
AI Use Accelerates for Insurers – and the Insured
The technology behind AI is not just used in insurance; it is also fast becoming an integral part of the insured items themselves. This adds yet another wrinkle to the process of assessing risk. For instance, self-driving vehicles and factory robots are increasingly likely to utilize AI, says Andrew Zarkowsky, head of the global technology practice at The Hartford. "Now, all of a sudden, software has the ability to control something that can cause bodily injury or property damage, and it really does change the dynamic from the general liability perspective."
Zarkowsky says the risks of an autonomous vehicle causing damage are quite different depending on the setting. A robot moving goods around a factory floor with no people present has quite a different risk profile from a car driving in downtown Boston. The factory-bound robot exists at one end of the risk spectrum, in a controlled environment with relatively lower risk, while the passenger vehicle in a town center is at the other. "You've got people, dogs, potholes, and all these things that are not controlled," he says.
Due to the complexity of insuring AI-driven activities, such as cars and robots, The Hartford employs highly trained industry specialists to oversee the underwriting of emerging technologies such as autonomous vehicles and robots. "I think you need to have that layer of expertise and that that's where the skilled human underwriter comes in," Zarkowsky says.
The industrial and personal use of AI-driven systems also raises the question: Who is liable when something goes wrong with computer software?
"It's a gray area," says Jonathan Schaeffer, professor of computing science at the University of Alberta and artificial intelligence researcher. In many cases, software agreements require the user to agree to absolve the company of all liability if something goes awry. "In that case, there's nothing you can do," he says.
Assessing who is liable for damage or injury caused by a self-driving car is trickier, Schaeffer says. "Is it the software, the hardware, or the human who is in the car?" These are questions still being debated because the use of AI technology has only relatively recently become mainstream. "There are no general guidelines," he says. "I think some of these questions should be answered by the government and by the public."
So far, different states have gradually developed their own rules for what is allowed on the road,1 and some have rules over who is liable in the event of an accident.2 However, given the various levels of 'self-driving' vehicles and the lack of case history with autonomous vehicles, it's hard to gauge which rules will prevail and in what circumstances.
"I think we're going to move away from estimating likely outcomes to prescriptive, which says what I as an insurance provider should do about it," The Hartford’s John says. That could mean adding caveats to a coverage contract or limiting what is covered in some ways based on the prescriptive recommendations of the AI analysis.
There are currently no global standards for developing AI, and there appears to be no general best practice. "Right now, the field is self-policing. There may come a point where the rules become entrenched in law and industry best practices," Schaeffer says.
“The bottom line is that AI and machine learning in the industry is not going away,” John adds. “It’s now a race to see how to best capitalize on its potential in a responsible way, and who will get there first.”
1 Governors Highway Safety Association, Autonomous Vehicles, 2021
2 Insurance Institute for Highway Safety (IIHS), Autonomous Vehicle Laws, July 2022
Brought to you by The Hartford. The content displayed is for information only and does not constitute an endorsement by, or represent the view of, The Hartford.
Information and links from this article are provided for your convenience only. Neither references to third parties, nor the provision of any link imply an endorsement or association between The Hartford and the third party or non-Hartford site, respectively.
The Hartford is not responsible for and makes no representation or warranty regarding the contents, completeness, accuracy, or security of any material within this article or on such sites. Your use of information and access to such non-Hartford sites is at your own risk. You should always consult a professional.
The Hartford Financial Services Group, Inc., (NYSE: HIG) operates through its subsidiaries, including the underwriting company Hartford Fire Insurance Company, under the brand name The Hartford®, and is headquartered in Hartford, CT. For additional details, please read The Hartford's legal notice at www.thehartford.com.