AI’s transformative potential cannot be overstated, especially for data-driven industries like insurance. From streamlining underwriting and claims to providing real-time complex data analytics, fraud detection, marketing content generation, and personalized customer engagement, AI will reshape insurance operations. Providers will be able to deliver their services more efficiently and accurately, while customers will enjoy a wider range of dynamic services tailored to their needs and lifestyle habits.
However, alongside AI-driven transformation comes AI-driven disruption. Insurance providers should carefully balance AI opportunities with AI risks, ensuring that their AI integration and deployment processes don’t facilitate undue discrimination, bias, data and cybersecurity vulnerabilities, IP threats, operational failures (e.g., workflow disruptions or insufficient employee reskilling), or reputational damage.
Crucially, seeing as the insurance industry is heavily regulated, providers will need to prioritize compliance and the development of internal AI governance frameworks that correspond with existing and emerging sector-specific policy requirements—the array of risks and compliance requirements for health insurers will differ from those of property or life insurers. While recent AI regulations and initiatives such as the EU AI Act and SB21-169 are adopting a broad approach, setting AI integration and deployment standards for entire industries, sector-specific policies like Colorado Regulation 10-1-1, which targets life insurance providers, will continue to emerge. In essence, insurance providers should adopt a balanced view, weighing industry-wide compliance requirements with as much consideration as sector-specific requirements.
Moreover, given the deeply data-driven nature of the insurance industry, providers will need to evaluate their AI integration initiatives from the perspective of data governance; otherwise, they risk incurring severe compliance penalties and possible market sanctions by breaching regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
Most importantly, certain sectors of the insurance industry, such as health and property insurance, are high-impact by definition. Insurance providers should therefore carefully evaluate the role AI plays in decision making, both in terms of transparency and explainability and in terms of unfair or inequitable customer treatment caused by discriminatory or biased AI outputs. Current and emerging AI regulations—including those mentioned above, as well as others operating outside the insurance industry such as the California Privacy Protection Agency’s Automated Decision-Making Technology Regulations (CPPA ADMTR), California Assembly Bill No. 331 (AB 331), and the White House’s Blueprint for an AI Bill of Rights—place a pronounced emphasis on fairness, transparency, and explainability, so insurers should prioritize these areas when envisioning their compliance strategy.
AI will profoundly impact the insurance industry. According to one survey, 58% of life insurers already use or plan to use AI, alongside 88% of auto insurers and 70% of home insurers, while more broadly, 73% of businesses had adopted AI in at least one area as of 2023. McKinsey also predicts that AI will contribute over one trillion dollars to the global insurance industry—generative AI (GenAI) will play a major role too, with forecasts estimating additional value contributions to the insurance and finance industries of up to $32B by 2027. Simply put, the AI tide is quickly rising, and the global insurance industry, given its heavy reliance on data, will be among those most affected from a regulatory standpoint.
Leveraging AI to Improve Compliance
Insurance providers must already deal with oceans of regulation. Fortunately, by leveraging AI, insurers can improve their provision of fair and equitable services to customers, address cyber and data security concerns, maintain transparency and explainability, and increase their ability to ingest, analyze, and interpret regulatory requirements in meaningful and actionable ways, streamlining compliance. There are several potential AI benefits for insurers to consider, which are expanded upon below:
- Risk analysis and modeling: AI, especially GenAI, can interpret massive amounts of structured and unstructured data from a wide array of sources, both traditional and non-traditional. From policy documents, financial statements, and medical records to social media, wearable devices, and Internet of Things (IoT) sensors, AI-driven data analytics can capture and uncover nuanced risk factors tied to insuring an individual or entity. Moreover, through pattern recognition, AI can highlight correlated risk factors that human underwriters might miss or overlook entirely. Together, these advantages enhance risk analysis and modeling, increasing insurers’ ability to accurately predict future claims while enabling more precise risk assessment and pricing. These processes can contribute to fairer and more equitable customer treatment by dynamically considering a much wider array of factors relevant to underwriting and claims management.
- Mitigating fraud and cybersecurity threats: Detecting fraudulent claims or non-disclosure of relevant information during the application process can be challenging, especially as data streams increase in richness and complexity. Fortunately, AI pattern recognition can help identify data anomalies and inconsistencies as well as patterns indicative of potentially fraudulent activities. By leveraging AI for fraud and anomaly detection, insurers can improve data transparency during the application process. Alternatively, pattern recognition can be leveraged to identify deviations in cybersecurity protocols that are indicative of malicious behavior or cybersecurity breaches. In specialized cases, GenAI systems can also be designed to autonomously detect and counter cyber attacks. Robust AI-driven cybersecurity and fraud prevention measures will help insurers ensure that data security, privacy, and transparency are preserved in line with existing regulations such as the GDPR and CCPA, and that customers are treated fairly on a consistent basis.
- Document synthesis and interpretation: When evaluating the risk of insuring a particular individual or entity, underwriters must often synthesize and interpret large amounts of documentation. Via document summarization, synthesis, classification, and generation capabilities, GenAI systems can now handle most of this process with minimal human engagement, allowing underwriters to quickly extract actionable insights and make more informed decisions, especially in complex cases that require human judgment. Such capabilities allow underwriters to consider a wider array of nuanced factors specific to individuals, resulting in insurance policies that more fairly reflect the needs and preferences of those insured. These GenAI benefits can also help insurers develop an up-to-date and actionable understanding of current and emerging compliance requirements.
- Real-time complex data analytics: IoT devices and wearables are becoming increasingly popular, giving rise to new data streams that are continuously evolving—this kind of data can be extremely valuable for insurers, since it reveals real-world behaviors that can subsequently inform adjustments in policy premiums and pricing strategies. To dynamically reassess risks, insurers can leverage AI for real-time complex data analytics, customizing premiums and pricing strategies on the fly and helping to ensure that customers are treated fairly. For instance, if someone always drives the speed limit in their Tesla, the vehicle may transmit this data to their auto insurer, where it undergoes real-time AI analysis, resulting in an immediate decrease in their policy premium.
- Personalized services: AI can parse risks more accurately and precisely than traditional methods, enabling the creation of more personalized insurance policies and customer-specific risk profiles. For instance, when leveraging AI for risk analysis and modeling, underwriters can develop individualized risk profiles for specific customers, offering more competitive pricing to low-risk customers or increasing premiums for high-risk customers. In more complex cases, underwriters can also leverage these capabilities to generate specialized policies or determine whether certain claims should be fulfilled. All in all, AI-driven personalized services can enhance transparency and contribute to more fair and equitable customer treatment, though insurers must be careful with potentially biased data since it can lead to discriminatory AI outputs and differential treatment.
- Customer engagement and self-service: By improving the efficiency, accuracy, and precision of the underwriting process, AI can dramatically reduce turnaround times for policy applications and claims processing, improving the customer experience and enabling a more equitable distribution of services—customers with particularly complex situations could receive services at virtually the same rate as those with simpler cases. AI can also support transparency and explainability: chatbots can handle customer queries in real time, and through GenAI-driven document synthesis and interpretation, insurers can provide customers with plain-language explanations of the intended purpose and technical function of AI, the role it plays in decisions that affect them, the kind of data used to train the model and drive decision making, their rights as consumers, and the methods by which to request re-evaluation by a human-in-the-loop.
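As a minimal, hypothetical sketch of the anomaly-detection idea behind the AI-assisted fraud screening described above (a toy z-score rule over made-up claim amounts, and a `flag_anomalies` helper we invent here for illustration, not a production fraud model):

```python
from statistics import mean, stdev

def flag_anomalies(claim_amounts, threshold=2.0):
    """Flag claims whose amount deviates from the batch mean by more than
    `threshold` standard deviations -- a toy stand-in for the pattern
    recognition a real AI fraud model would perform."""
    mu = mean(claim_amounts)
    sigma = stdev(claim_amounts)
    return [
        (i, amount)
        for i, amount in enumerate(claim_amounts)
        if sigma > 0 and abs(amount - mu) / sigma > threshold
    ]

# Hypothetical batch of claim amounts; the last one is an outlier.
claims = [1200, 950, 1100, 1300, 1050, 980, 25000]
print(flag_anomalies(claims))  # → [(6, 25000)]
```

In practice, an insurer would use a trained model over many features (claim history, provider, timing) rather than a single statistic, but the principle is the same: surface deviations from expected patterns for human review.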
Things to Look Out For
Opportunities and risks go hand-in-hand. Insurance providers will need to pay close attention to potential AI risks as opportunities unfold; otherwise, they may suffer considerable reputational damage, operational failures, and most importantly, compliance penalties. Below, we describe some of the core AI compliance risks for insurers to consider:
- Discrimination and bias: If AI systems are trained on data that is insufficiently representative or contains deep-seated historical and systemic biases, they may inadvertently perpetuate and amplify those biases, resulting in unfair or discriminatory treatment of certain customers based on characteristics such as race or gender. Biased AI outputs can also arise from statistical and algorithmic biases, so it is especially important for insurance providers to rigorously evaluate their systems for validity and reliability prior to deployment; otherwise, they risk costly legal and compliance penalties as well as reputational damage and trust erosion.
- Data privacy and security: The increasing complexity, richness, and scale of incoming data streams raises serious data security and privacy concerns, especially in cases where sensitive data—such as health or financial data—is ingested. The performance of AI systems depends, in large part, on the quality and quantity of the data on which they are trained, and given that the insurance sector frequently deals with sensitive data, such systems are likely to become cyber attack targets. Cybersecurity breaches, especially when they compromise consumer privacy, will result in costly compliance penalties. Insurers must therefore pursue data modernization by leveraging cloud security infrastructures, data anonymization and encryption techniques, access controls and authentication measures, data security training protocols, and compliance with existing data governance regulations such as the GDPR and CCPA.
- Intellectual property: As insurance providers develop and acquire proprietary AI technologies, they must pay close attention to IP rights, taking steps to protect their own patents and copyrights without infringing upon those of others. Moreover, in cases where GenAI systems are leveraged to generate proprietary content, whether it concerns customer outreach, marketing, or sales, insurers must evaluate content prior to its release. Because many GenAI systems are trained on data that is scraped from various digital domains and platforms, AI-generated content may infringe upon the IP rights of individual businesses or consumers, and while such practices are still legally ambiguous, they are best avoided—legal disputes over AI-related IP can still be very costly and time-consuming.
- Hallucinations: Despite GenAI’s impressive and diverse array of capabilities, it can hallucinate information that ranges from partially true to blatantly false. Wherever insurers choose to leverage GenAI, human oversight and validation mechanisms must be in place to ensure that AI-generated content is truthful and accurate. Most notably, where GenAI is utilized to drive, supplement, or assist with human decision-making processes, especially in consequential contexts such as determining whether to fulfill a claim, AI-generated content must be validated by an expert-in-the-loop; otherwise, faulty and potentially harmful decisions might be made, leading to substantial compliance penalties. Regulations such as the EU AI Act, the CPPA ADMTR, and AB 331 directly target the use of AI in decision-making contexts, so insurers must pay close attention to the evolution of regulations in this domain.
- Rapid regulatory change: The AI regulatory landscape is rapidly changing, and within high-impact industries like insurance, it is very likely that complex sector-specific requirements will emerge on top of the already existing mountains of regulation. Navigating this regulatory ecosystem will be particularly challenging for insurers—some more so than others depending on their sector—but regulators will not see this as an excuse, nor will the public. Failure to comply with existing and emerging regulatory requirements will result in fines, and in more extreme circumstances, market sanctions. Fortunately, this is one area where GenAI can make a positive difference, namely by helping insurance providers synthesize and interpret existing regulatory documents, thereby informing the development and adaptability of their internal AI and data governance frameworks.
- Over-reliance on AI: AI systems are particularly attractive decision-making tools given their ability to make complex information more digestible, accessible, and actionable. However, even state-of-the-art systems like ChatGPT are still in the early stages of development, and are limited by an inability to generalize beyond training data to novel contexts or within changing environments. Moreover, such systems also struggle with long-term sequential planning and strategy execution, and while they may be leveraged to assist with such processes, an expert-in-the-loop should always be present. At this stage of AI innovation, insurers must understand that while AI systems may streamline complex processes like underwriting and claims management, they are not yet reliable enough to execute these processes without human oversight—an over-reliance on AI for complex decision making could lead to substantial harm and compliance penalties.
- Insufficient training and upskilling: The pressure to adopt AI quickly will not dwindle any time soon, and insurers, like most other businesses, are scrambling to stay innovative and competitive. Nonetheless, any AI integration effort will need to be accompanied by workforce training and upskilling opportunities; otherwise, insurers risk leveraging AI in ways that not only waste time and resources but also give rise to risks that carry compliance consequences. AI training should be specific, targeting the skills that various teams and departments require to leverage AI effectively and responsibly in pursuit of their pre-identified business objectives. At a broader level, insurers should also prioritize workforce AI training that teaches employees how to identify the potential AI risks, vulnerabilities, and limitations particular to the sector their company operates within, ensuring that AI is leveraged responsibly and in a way that corresponds with compliance requirements.
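One way to make the bias-evaluation point above concrete is a simple fairness check. The sketch below (hypothetical decisions, group labels, and helper name) computes the demographic parity difference, i.e., the gap in approval rates between customer groups, one common metric for auditing AI-driven decisions before deployment:

```python
def demographic_parity_difference(decisions, groups):
    """Compute the gap between the highest and lowest approval rates
    across groups.

    decisions: list of 1 (approved) / 0 (denied) model outputs
    groups:    parallel list of group labels, e.g. "A" / "B"
    """
    rates = {}
    for label in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical claim-approval decisions for two customer groups.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(f"approval-rate gap: {gap:.2f}")  # group A: 0.60, group B: 0.40
```

A large gap does not by itself prove discrimination, but it is the kind of signal that should trigger a deeper review of the model and its training data before customers are affected.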
Looking Ahead
Since the release of ChatGPT, competitive pressures to integrate AI across numerous industries and domains have grown dramatically. This doesn’t only increase the risk that companies will cut corners on safety and compliance, but also that they will waste time and resources pursuing AI initiatives that are misaligned with long-term goals or objectives in the interest of gaining a competitive edge. To maximize the expected value that AI delivers and ensure compliance, insurance providers should begin their AI integration process by considering a series of fundamental questions:
- Do we have the necessary technological infrastructure required to integrate AI effectively, and can this infrastructure be easily adjusted to accommodate novel AI developments?
- Do we already have an internal AI governance framework in place that enables responsible AI integration, and does this framework possess enough built-in adaptability and flexibility to address AI advancements and emerging compliance requirements?
- Have we identified the most prevalent AI risks and benefits specific to our company and sector, and do we have a strategy that allows us to mitigate the former and capitalize on the latter?
- What role will AI play in decision making, and how will we ensure that systems are transparent and explainable in terms of their design, function, use, and impact on customers?
- What changes, if any, will we need to make to our data governance procedures to ensure that we integrate AI responsibly and that we comply with existing data governance policies such as the GDPR or CCPA?
- For what purposes do we intend to use AI, and do our AI integration goals align with our long-term business objectives?
- What teams and departments will use AI, and what metrics will we establish to effectively monitor their performance and ensure that stated AI objectives are reached?
- Does our workforce possess the skills required to leverage AI effectively and responsibly, and if not, what upskilling and reskilling training procedures will we implement to overcome this problem?
- What are the sector-specific compliance requirements we need to consider, and how will we integrate these considerations into our organization’s AI governance framework effectively?
- Do we already have mechanisms in place that allow us to experiment with AI and test our applications prior to deployment to ensure safety, reliability, validity, transparency, explainability, and efficacy?
There are many more important compliance-related questions for insurers to consider at the granular level, namely, once they have already begun the AI integration process. However, these foundational questions, if taken seriously and answered with careful consideration, will help insurers lay the groundwork required to future-proof their organization.
Currently, we are only seeing the tip of the AI iceberg—over the coming years, advances in robotics and autonomous technologies, 3-D printing, IoT devices and edge processing, as well as further AI innovations, will contribute an unprecedented level of data complexity to the insurance landscape. From 3-D printed infrastructure and autonomous manufacturing to real-time lifestyle monitoring via wearable devices, insurers will have to grapple with a wealth of new data streams that are constantly evolving and expanding, challenging humans’ ability to ingest, classify, and draw data insights efficiently, accurately, and securely.
Insurers will have to embrace AI as their new “best friend” in order to make sense of all this data, address technological risks and benefits appropriately, and guarantee compliance, shifting their mindset from a reactive “detect and repair” view to a proactive “predict and prevent” strategy. Since AI integration is becoming a necessity due to competitive pressures and consumer expectations, insurers should begin developing AI and data governance frameworks now to fast-track compliance with emerging regulations, maintain operational efficiency, and uphold strong reputations.
Fortunately, we here at Lumenova AI have made it our mission to streamline the responsible AI integration process, and in doing so, have developed an all-in-one platform that can address your AI governance and risk management needs throughout the AI lifecycle. Whether you need to identify and develop relevant policies and frameworks, monitor and report on system performance, or measure and evaluate AI risks, our platform can help. To learn more, request a product demo today.