- Protect personally identifiable information (PII) against privacy breaches. PII is highly sensitive, and inadequate protection leaves it vulnerable to significant breaches.
- Understand the best approach to protecting your insurance systems.
- Traditional methods are no longer sufficient because legacy systems may not be fully adapted to the insurance industry's specific needs.
- Insurers are often unaware of PII risk points and of the compliance obligations that apply to them.
Our Advice
Critical Insight
- Integrating privacy-preserving methods seamlessly into existing AI systems used in insurance will maintain system performance while safeguarding data privacy.
Impact and Result
- Determine which insurance-specific risks apply to your organization.
- Use a continuous improvement approach with metrics and a risk-based strategy that aligns with a privacy framework tailored to your organization’s needs.
- Training on AI is essential to help your employees understand the risks that AI can introduce and to influence overall culture.
Safeguard Your Data When Deploying AI in Your Insurance Systems
Protect PII by leveraging AI to reduce your risk of exposure.
Analyst perspective
Protect sensitive PII and navigate complex regulatory requirements.
Privacy risks in the insurance industry are growing as insurers adopt advanced AI technologies for underwriting, claims processing, and customer interactions.
Insurers handle vast amounts of data – from health records to financial histories – fed into AI systems that promise accuracy and efficiency but pose privacy concerns. A single breach could compromise thousands of customers' personal information, causing severe reputational and financial damage. It’s not just about what AI can do; it’s about ensuring it’s done securely and ethically.
Regulatory frameworks demand strict compliance, yet AI introduces complexities that make this harder. Insurers must ensure AI respects customer consent, limits data usage, and mitigates bias. Otherwise, the consequences could be costly in both fines and lost trust.
The industry must be proactive, implementing rigorous data governance, ensuring transparency, and fostering customer confidence in an era where AI promises much – but must be handled with care.
Arzoo Wadhvaniya
Research Specialist, Industry Research Info-Tech Research Group
Executive summary
Your Challenge
- Protect personally identifiable information (PII) against privacy breaches. PII is highly sensitive, and inadequate protection leaves it vulnerable to significant breaches.
- Understand the best approach to protecting your insurance systems. Traditional methods are no longer sufficient because legacy systems may not be fully adapted to the insurance industry's specific needs.
- Insurers are often unaware of PII risk points and of the compliance obligations that apply to them.
Common Obstacles
- AI presents new challenges for your organization: it lacks formal acceptable-use policies, your employees do not fully understand how AI works, and your data security program did not anticipate applications like it.
- Unfamiliarity with AI may create confusion about how to assess and mitigate risks, especially when determining how the technology can be used.
- Regulatory requirements can be complex and may not align seamlessly with AI processes, leading to compliance risks.
Info-Tech's Approach
- Determine which insurance-specific risks apply to your organization.
- Use a continuous improvement approach with metrics and a risk-based strategy that aligns with a privacy framework tailored to your organization's needs.
- Train employees on AI so they understand the risks it can introduce and to influence overall culture.
Info-Tech Insight
Integrating privacy-preserving methods seamlessly into existing AI systems used in insurance will maintain system performance while safeguarding data privacy.
Insurance has the highest number of data breaches
Source: “By Industry,” Statista, 2024
Source: “Data Breaches Worldwide,” Statista, 2024
Info-Tech Insight
Insurance faces significant risks from data breaches, with PII frequently compromised. Insurers must strengthen data protection strategies and proactively update security protocols to mitigate these growing threats.
PII is the primary target of a breach
Source: “Data Breaches,” Statista, 2024
Info-Tech Insight
Know your data and governance environment before you act. Scope the data that will potentially be impacted and ensure appropriate controls are in place.
Organizations are slow to recognize and react
It takes organizations an average of 204 days to identify a data breach and 73 days to contain it.
Source: Secureframe, 2024
Source: “Data Breaches,” Statista, 2024
Info-Tech Insight
Delaying breach responses in insurance invites regulatory fines, erodes customer trust, and compounds financial losses. Swift action is essential to prevent long-term reputational, financial, and operational damage.
Security and compliance are a significant challenge to implementing generative AI
Source: “Insurance CEO Outlook,” KPMG, 2023
Insurance IT departments need to improve the effectiveness of key security and compliance processes.
Source: MGD Benchmark, Info-Tech Research Group, 2023
Info-Tech Insight
Policymakers and regulators should build on existing regulations when developing approaches to AI in insurance to balance customer protection and innovation.
Don’t fall behind on AI risk management
Leaders must define acceptable AI use and enhance data security to mitigate emerging risks.
Leaders who are unsure how to evaluate AI-related risks should start by assessing their risk management practice.
- Assess risk and define acceptable use. These are the first key steps to AI security improvement.
- Reevaluate your data security controls and plan necessary improvements to further mitigate risks associated with enterprise use of AI.
Download AI Risk Assessment Tool
Download Govern the Use of AI Responsibly With a Fit-for-Purpose Structure
IT leaders are concerned about new risks with AI
71%
IT leaders who believe AI will introduce new data security risks
Source: “US Survey,” KPMG, 2023
Few organizations dedicate a team to manage AI risks.
6%
Organizations with a dedicated risk assessment team for AI
Source: “US Survey,” KPMG, 2023
Info-Tech Insight
What many miss about these risks is that most are new versions of familiar data security risks. They can be mitigated by defining acceptable use and necessary security controls to support governance of AI.
Protect sensitive information in all Gen AI systems
Generative AI requires careful management of PII to ensure security and compliance.
Key risk types for Gen AI
Data Security
The greatest risk associated with using Gen AI is a loss of data confidentiality and integrity from inputting sensitive data into the AI system or using unverified outputs from it.
Data Privacy
Care must be taken when choosing whether to enter a given data type into an AI system. This is especially true in a publicly available system, which is likely to incorporate that information into its training data. Problems may still arise in a private model, particularly if it is trained using PII or personal health information (PHI), as such information may appear in a Gen AI output.
Data Integrity
Data integrity risk comes from repeatedly using unverified Gen AI outputs. A single output with faulty data may not cause much trouble, but if these low-quality outputs are added to databases, they may compromise the integrity of your records over time.
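One practical privacy-preserving step before any data enters a Gen AI system is to redact recognizable PII from prompts. The sketch below is a minimal, assumed illustration using hand-rolled regex patterns; a production deployment would rely on a vetted detection library and far broader entity coverage.

```python
import re

# Illustrative patterns for common PII found in insurance data.
# These are assumptions for the sketch, not an exhaustive detector.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with type placeholders before the
    text is sent to any Gen AI system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Claimant John Doe, SSN 123-45-6789, email jdoe@example.com"
print(redact_pii(prompt))
# The SSN and email are replaced with [SSN] and [EMAIL] placeholders.
```

Note that pattern-based redaction alone cannot catch free-text identifiers such as names or addresses, which is why it should complement, not replace, acceptable-use policies and access controls.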
Each data risk type has varying risk factors for PII
Prioritize security and compliance to protect sensitive information in Gen AI systems
The insurance industry is most susceptible to three specific types of risks associated with Gen AI.
Data Security
Data Breaches of PII
AI systems within insurance companies handle vast amounts of sensitive customer data, including health records, financial details, and personal identifiers. These systems, if not adequately secured, can become targets for cyberattacks, leading to unauthorized access to sensitive information.
Data Privacy
Insider Threats
Employees or third-party contractors with authorized access to AI systems and sensitive customer data may exploit their privileges, either intentionally or through negligence. This can lead to data theft, manipulation of critical AI models, or tampering with claims and pricing algorithms.
Regulations
Noncompliance With Regulations
Privacy regulations like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) impose strict requirements on how customer data is collected, processed, and stored. AI systems in insurance, which often require large datasets to function effectively, may unintentionally violate these regulations if not properly designed and monitored.
Measure the value of this research
Expedite your policy and lower risk.
IT leaders prioritizing Gen AI in the next 18 months
67%
Source: Salesforce, 2023
Use the insurance capability map to pinpoint and assess Gen AI risk exposure within the organization
Business Capability Map Defined
A business capability map defines what a business does to enable value creation, rather than how. Business capabilities:
- Represent stable business functions.
- Are unique and independent of each other.
- Typically have a defined business outcome.
A capability map is a great starting point for identifying value chains within an organization, as it strongly indicates the processes involved in delivering on the value streams.
Download the Insurance Industry Reference Architecture Template
Info-Tech Insight
Leverage the Insurance Industry Reference Architecture to define value streams and value chains.
Map risk types to your PII exposure
Determine the primary risk types and the degree of risk exposure.
Illustrative example for insurance companies
Assess risks of Gen AI
Risks depend on the use case.
- Exactly which risk factors apply (and to what extent) depends on your Gen AI use case; the biggest variables are whether you're inputting data, whether you're using system outputs, and whether the system is public or private.
- For example, asking the system to organize a data input so that you can use the output carries a lower data confidentiality risk in a private system than in a public one because the information isn’t shared beyond the organization’s AI system.
- However, another possible use case is asking a public AI system to generate a data set by compiling industry statistics, which carries virtually no input-related risk but has a significant data quality/integrity risk because the system may use unknown or even fictitious sources.
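The three variables above (data input, output reuse, public vs. private system) can be captured in a simple scoring sketch. The weights and labels below are illustrative assumptions for this example, not Info-Tech's scoring methodology.

```python
from dataclasses import dataclass

@dataclass
class GenAIUseCase:
    """Illustrative attributes that drive Gen AI risk exposure."""
    inputs_data: bool    # sensitive data is entered into the system
    uses_output: bool    # system output feeds downstream processes
    public_system: bool  # publicly hosted model vs. private deployment

def risk_profile(uc: GenAIUseCase) -> dict:
    """Assumed scoring: confidentiality risk rises when data is input,
    especially to a public system; integrity risk rises when outputs
    are reused, especially from a public system."""
    confidentiality = 0
    integrity = 0
    if uc.inputs_data:
        confidentiality += 2 if uc.public_system else 1
    if uc.uses_output:
        integrity += 2 if uc.public_system else 1
    label = {0: "low", 1: "medium", 2: "high"}
    return {"confidentiality": label[confidentiality],
            "integrity": label[integrity]}

# The example from the text: compiling industry statistics in a public
# system carries little input risk but significant integrity risk.
print(risk_profile(GenAIUseCase(inputs_data=False,
                                uses_output=True,
                                public_system=True)))
# {'confidentiality': 'low', 'integrity': 'high'}
```

A sketch like this can help teams compare use cases consistently before a deeper assessment with the risk map on Tab 3.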
“All AI models generate text based on training data and the input they receive. Companies may not have complete control over the output, which could potentially expose sensitive or inappropriate content during conversations. Information inadvertently included in a conversation with a Gen AI presents a risk of disclosure to unauthorized parties.”
– Eric Schmitt, Global Chief Information Security Officer, Sedgwick
Info-Tech Insight
Watch for overlap. There will usually be both an input and an output component when using Gen AI, which means both risk factors are present, but one may be dominant. Therefore, both inputs and outputs should receive sign-off before use to limit data confidentiality and integrity risks.
Activity: Gen AI Risk Map
Download Generative AI Risk Map for the Insurance Industry
Determine the risks associated with your Gen AI use case and the applicable policy statements.
- Determine which risks apply to your AI insurance use case (i.e. data breaches of PII, insider threats, noncompliance with privacy regulations).
- Review the risk map on Tab 3 to better understand how those risks are realized and to determine what mitigating tactics and policy statements are required.
- Note any key mitigating tactics or policy statements that are not currently well represented in your security program.
Info-Tech Insight
Look for problems before getting invested. While Gen AI opens many possibilities, some of its risks will be difficult to address. For example, if your proposed use case requires sensitive data to be entered into a public AI system to produce an output for use in your supply chain, it will be virtually impossible to mitigate such risks effectively.
Activity: Update the AI Security Policy Template for the Insurance Industry
Download AI Security Policy Template for the Insurance Industry
- After determining the applicable policy statements, update this policy template by deleting the statements that don’t apply.
- Each policy statement is cross-referenced using the code provided in the risk map.
Research contributors and experts
Rob Tyrie
Special Advisor, AI and Insurance
Accern
Kate Wood
AVP, Research - Development
Info-Tech Research Group
Jody Gunderman
Executive Counselor, Global Services
Info-Tech Research Group
Christine West
Senior Managing Partner, Financial & Professional Services
Info-Tech Research Group
Safayat Moahamad
Research Director, Research - Development
Info-Tech Research Group
Related Info-Tech research
Priorities for Adopting an Exponential IT Mindset in the Insurance Industry
- The insurance industry has already gone through a period of rapid change that has resulted in many companies falling further behind.
- Exponential IT is an opportunity for your company to adopt new technologies and capabilities that will allow it to compete more effectively.
Address Security and Privacy Risks for Generative AI
- As organizations adopt generative AI use cases, they are confronted with important security and privacy risks, including governing enterprise use of generative AI to maximize benefits and minimize risks and protecting data confidentiality and integrity when using generative AI systems.
Insurance Core Systems Modernization
- Insurers must start addressing issues caused by old infrastructure and technology platforms using legacy systems.
- Understand the critical challenges insurance firms face when modernizing their existing insurance technology.