Why Is AI Transforming Insurance Underwriting?

Insurance companies increasingly deploy AI for underwriting decisions determining coverage eligibility and pricing, claims processing and fraud detection, risk assessment and predictive modeling, and customer service automation. AI promises to improve underwriting accuracy through sophisticated data analysis, reduce costs through automation, detect fraud more effectively, and personalize pricing based on individual risk profiles.

However, AI insurance applications create significant legal challenges including discrimination against protected classes in underwriting, unfair claim denials from algorithmic errors, lack of transparency in automated decisions, and privacy concerns from extensive data collection and analysis.

Insurers using AI must navigate complex regulatory frameworks, including state insurance laws prohibiting unfair discrimination, federal anti-discrimination statutes, oversight by state departments of insurance, and emerging AI-specific insurance regulations.

Understanding legal constraints on insurance AI is essential for compliance, avoiding regulatory enforcement, and maintaining consumer trust while leveraging technology to improve insurance operations.

Insurance Regulatory Framework

State Insurance Regulation

Insurance is primarily regulated at the state level through state departments of insurance or similar agencies. Each state has insurance codes and regulations governing underwriting practices, rate setting and approval, claims handling, and market conduct.

Insurers deploying AI must comply with the requirements of every state in which they operate.

National Association of Insurance Commissioners

The NAIC develops model laws and regulations that states adopt, often with variations. The NAIC has issued guidance on AI and big data in insurance addressing fairness and transparency, consumer protection, and regulatory oversight.

Federal Involvement

While states primarily regulate insurance, federal laws still apply, including anti-discrimination statutes like the Fair Housing Act and the Equal Credit Opportunity Act, FTC authority over unfair or deceptive practices, and federal oversight of certain insurance programs.

Unfair Discrimination Prohibitions

State Unfair Discrimination Laws

All states prohibit unfair discrimination in insurance, though definitions of unfair discrimination vary. Generally, discrimination is unfair when it is based on characteristics not predictive of risk or loss, such as race, religion, or national origin, or on factors serving as proxies for protected characteristics.

Actuarially sound discrimination based on legitimate risk factors is typically permitted.

Protected Characteristics

State laws commonly prohibit discrimination based on race and color, religion, national origin, sex and gender, and increasingly sexual orientation and gender identity.

Some states prohibit additional bases like genetic information or disability.

Proxy Discrimination

AI models may rely on factors that serve as proxies for protected characteristics, creating disparate impact. For example, credit scores, ZIP codes, or occupation may correlate with race, producing indirect discrimination even when race is never used explicitly.

Regulators scrutinize proxy variables that create discriminatory effects.
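One rough way to screen a candidate rating factor for proxy risk is to measure its statistical association with a protected characteristic. The sketch below computes a Pearson correlation on hypothetical data; the variable names and the 0.3 flag threshold are invented for illustration, and real proxy reviews use far more rigorous methods, such as regression-based analysis.

```python
# Illustrative proxy screen: correlate a candidate rating factor with
# membership in a protected group. All data and thresholds are hypothetical.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical data: credit scores and protected-group membership (1/0).
credit_scores = [620, 700, 580, 640, 710, 590, 730, 600]
group_member  = [1,   0,   1,   1,   0,   1,   0,   1]

r = pearson(credit_scores, group_member)
if abs(r) > 0.3:  # illustrative flag threshold, not a legal standard
    print(f"Factor may act as a proxy (r = {r:.2f}); review before use")
```

A flagged factor is not automatically unlawful; the point of the screen is to trigger the kind of justification and disparate impact review discussed below.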

Actuarial Soundness and Risk-Based Pricing

Actuarial Justification

Insurers can differentiate based on factors that are actuarially sound—statistically predictive of risk or loss, based on credible data, and applied consistently.

AI underwriting must demonstrate actuarial justification for risk classifications and pricing differences.
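As a toy illustration of "statistically predictive of risk," the sketch below compares observed claim frequency across levels of a hypothetical rating factor. The records, field names, and mileage bands are invented; real actuarial justification relies on credibility-weighted analysis and generalized linear models, not raw frequencies.

```python
# Illustrative check of predictiveness: does claim frequency differ
# materially across levels of a rating factor? Data is hypothetical.
from collections import defaultdict

def claim_frequency_by_level(records):
    """Observed claim frequency for each level of a rating factor.

    records: iterable of (factor_level, had_claim) pairs.
    """
    counts = defaultdict(lambda: [0, 0])  # level -> [claims, exposures]
    for level, had_claim in records:
        counts[level][0] += int(had_claim)
        counts[level][1] += 1
    return {lvl: claims / n for lvl, (claims, n) in counts.items()}

# Hypothetical policy records: (mileage band, claim occurred this term?)
records = (
    [("low", True)] * 5  + [("low", False)] * 95 +    # 5% frequency
    [("high", True)] * 15 + [("high", False)] * 85    # 15% frequency
)

freq = claim_frequency_by_level(records)
print(freq)  # {'low': 0.05, 'high': 0.15}
```

A threefold difference in observed frequency suggests the factor is risk-relevant, but an insurer would still need credible data volumes and consistency of application before relying on it in filed rates.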

Balancing Fairness and Accuracy

Tension exists between actuarial accuracy and fairness. AI may identify subtle risk factors that improve accuracy but create classifications regulators deem unfair or discriminatory.

For example, AI might find that education level predicts claims, yet using it may be prohibited if it creates proxy discrimination.

Disparate Impact Analysis

Even actuarially justified factors may be prohibited if they create a substantial disparate impact on protected groups without sufficient justification. Insurers should conduct disparate impact analysis: evaluating outcomes by protected characteristic, quantifying differences in approval rates or pricing, and assessing whether business necessity justifies any disparities.
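The "quantifying differences in approval rates" step can be sketched as an adverse impact ratio. Note that the 0.8 ("four-fifths") threshold below is borrowed from EEOC employment-selection guidance purely as an illustration; insurance regulators do not mandate a specific numeric cutoff, and the group labels and data are hypothetical.

```python
# Illustrative disparate impact screen: ratio of approval rates between a
# protected group and a reference group. Data and threshold are hypothetical.

def approval_rate(decisions):
    """Fraction of decisions that were approvals (True = approved)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Protected group's approval rate divided by the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical underwriting outcomes (True = approved).
reference_group = [True] * 90 + [False] * 10   # 90% approved
protected_group = [True] * 60 + [False] * 40   # 60% approved

air = adverse_impact_ratio(protected_group, reference_group)
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.8:  # illustrative "four-fifths" cutoff from employment law
    print("Potential disparate impact; assess actuarial justification")
```

A low ratio does not end the analysis; it shifts the inquiry to whether a legitimate, actuarially sound business need explains the disparity.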

Federal Anti-Discrimination Laws

Fair Housing Act

The Fair Housing Act prohibits discrimination in housing-related transactions including homeowners and renters insurance based on race, color, national origin, religion, sex, familial status, or disability.

AI underwriting for property insurance must not create disparate impact on protected groups.

Equal Credit Opportunity Act

ECOA prohibits discrimination in credit transactions. While insurance isn’t credit, ECOA may apply to credit-based insurance scores and insurance premium financing.

Americans with Disabilities Act

The ADA prohibits disability discrimination. Insurers cannot deny coverage based solely on a disability unrelated to risk, though they may base decisions on sound actuarial data about disability-related risk factors.

AI systems analyzing health data must carefully navigate disability discrimination prohibitions.

Data Privacy and Consumer Protection

Insurance Data Collection

AI underwriting relies on extensive data including traditional insurance data like claims history and credit reports, social media and online activity, telematics and IoT device data, and health tracking and wearable device data.

Data collection must comply with privacy laws and consumer expectations.

FCRA and Consumer Reports

The Fair Credit Reporting Act regulates the use of consumer reports for insurance, including permissible-purpose requirements, adverse action notices when coverage is denied or priced higher based on a consumer report, and consumer rights to access and dispute information.

AI using alternative data sources may trigger FCRA obligations if data constitutes consumer reports.

State Privacy Laws

State privacy laws like the CCPA apply to insurers collecting consumer data, requiring notice of data collection and use, consumer access and deletion rights, and opt-outs for data sales or certain other uses.

Transparency and Explainability Requirements

Explanation of Adverse Actions

Insurance laws typically require explaining reasons for coverage denials, cancellations, or unfavorable pricing. AI-based decisions must generate explanations satisfying these requirements.

However, complex AI models may be difficult to explain in consumer-friendly terms.

State Transparency Initiatives

States including Colorado and California have enacted or proposed laws requiring insurers using AI to provide transparency about the algorithms used in underwriting, the data sources and factors considered, and how consumers can challenge automated decisions.

Model Governance and Documentation

Regulators expect insurers to maintain documentation of AI model development and validation, data sources and quality, bias testing results, and model performance monitoring.

AI in Auto Insurance

Telematics and Usage-Based Insurance

AI analyzes telematics data from vehicles or smartphones to assess driving behavior and personalize auto insurance pricing. This raises questions about privacy of location and driving data, accuracy of behavior assessment, and potential discrimination based on correlated factors.
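As an illustration of the kind of behavior assessment described above, the sketch below rates hard-braking events per 100 miles from hypothetical trip records. The field names, weights, and bands are invented for illustration and do not reflect any insurer's actual usage-based model.

```python
# Hypothetical usage-based-insurance sketch: score driving behavior from
# telematics trip records. Thresholds and band names are invented.

def harsh_events_per_100mi(trips):
    """trips: iterable of (miles_driven, harsh_brake_count) pairs."""
    miles = sum(m for m, _ in trips)
    events = sum(e for _, e in trips)
    return 100 * events / miles

def band(rate):
    """Map a harsh-braking rate to an illustrative pricing band."""
    if rate < 1.0:
        return "preferred"
    if rate < 3.0:
        return "standard"
    return "elevated"

trips = [(12.5, 0), (30.0, 1), (7.5, 0)]  # 50 miles, 1 harsh brake
rate = harsh_events_per_100mi(trips)
print(f"{rate:.1f} harsh brakes per 100 mi -> {band(rate)}")
```

Even a toy pipeline like this surfaces the legal questions in the paragraph above: the location data feeding it is sensitive, the behavior inference may be inaccurate, and the bands could correlate with protected characteristics.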

Autonomous Vehicle Insurance

As autonomous vehicles proliferate, liability shifts from drivers to manufacturers and software providers. AI underwriting must adapt to assess autonomous system risks, manufacturer liability exposure, and cybersecurity vulnerabilities.

AI in Health and Life Insurance

Medical Underwriting Restrictions

The Affordable Care Act prohibits health insurers from denying coverage or varying premiums based on health status for ACA-compliant plans. However, life insurance and disability insurance can use health information for underwriting.

AI analyzing health data must navigate these distinctions.

Genetic Information Nondiscrimination Act

GINA prohibits health insurers and employers from discriminating based on genetic information. However, GINA doesn’t apply to life, disability, or long-term care insurance.

AI using genetic or genomic data faces complex compliance requirements.

Wearable and Health App Data

Insurers increasingly partner with fitness trackers and health apps to collect wellness data for underwriting or premium discounts. This creates concerns about data accuracy and representativeness, privacy and consent, and penalties for non-participation or poor performance.

AI in Claims Processing

Automated Claims Decisions

AI enables automated claims approval or denial for routine claims. However, unfair claims practices laws prohibit misrepresenting policy provisions, failing to acknowledge claims promptly, failing to conduct reasonable investigations, and denying claims without a reasonable basis.

AI claims systems must satisfy these requirements.

Fraud Detection

AI fraud detection analyzes patterns suggesting fraudulent claims. While beneficial, false positives (legitimate claims flagged as fraud) can harm consumers.

Insurers must implement appeal processes and human review for fraud allegations.

Damage Assessment

AI analyzes photos or videos to assess property damage or estimate repair costs. Accuracy is critical—underestimating damage harms policyholders while overestimating inflates costs.

Rate Filing and Approval

Prior Approval States

Many states require insurers to file rates with regulators for approval before use. Rate filings must demonstrate that rates are not excessive, inadequate, or unfairly discriminatory.

AI-based rating algorithms must be explained in rate filings with sufficient detail for regulators to evaluate actuarial soundness.

File-and-Use and Use-and-File

Some states allow file-and-use (implement after filing) or use-and-file (implement immediately, file later). Even in these states, regulators can review rates and require changes.

Rate Filing Challenges

AI algorithms create challenges for rate filings including explaining complex models to regulators, protecting proprietary algorithms while demonstrating soundness, and updating rates as models evolve.

Model Risk Management

Regulatory Expectations

Regulators expect robust model risk management including model development and validation, independent review and challenge, ongoing monitoring and recalibration, and governance and oversight structures.

Bias Testing and Mitigation

Insurers should regularly test AI models for bias by conducting disparate impact analysis, evaluating performance across demographic groups, and implementing mitigation strategies when bias is identified.
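One concrete form of "evaluating performance across demographic groups" is comparing error rates, for example, the false positive rate of a fraud-flagging model, across groups. The sketch below uses invented records and an illustrative tolerance; real testing programs examine many metrics (false negative rates, calibration, pricing deltas) and apply statistical significance tests.

```python
# Illustrative bias test: compare a model's false positive rate (FPR)
# across demographic groups. Records, labels, and tolerance are hypothetical.

def false_positive_rate(pairs):
    """FPR over (predicted_positive, actually_positive) pairs."""
    fp = sum(1 for pred, actual in pairs if pred and not actual)
    negatives = sum(1 for _, actual in pairs if not actual)
    return fp / negatives

def fpr_by_group(records):
    """records: iterable of (group, predicted_positive, actually_positive)."""
    groups = {}
    for group, pred, actual in records:
        groups.setdefault(group, []).append((pred, actual))
    return {g: false_positive_rate(pairs) for g, pairs in groups.items()}

# Hypothetical fraud-flagging outcomes (all actually non-fraudulent).
records = (
    [("A", True, False)] * 2 + [("A", False, False)] * 98 +   # FPR 2%
    [("B", True, False)] * 8 + [("B", False, False)] * 92     # FPR 8%
)

rates = fpr_by_group(records)
gap = max(rates.values()) - min(rates.values())
if gap > 0.05:  # illustrative tolerance, not a regulatory threshold
    print(f"FPR disparity across groups ({gap:.2f}); investigate and mitigate")
```

When a gap like this appears, mitigation might mean retraining on rebalanced data, removing or capping a contributing feature, or routing affected decisions to human review.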

Model Documentation

Maintain comprehensive documentation of model methodology and assumptions, data sources and limitations, validation results, and performance metrics.

Consumer Rights and Appeals

Right to Human Review

Some jurisdictions require, or are considering requirements for, human review of adverse automated decisions. Consumers should be able to request reconsideration by a human rather than relying solely on algorithmic determinations.

Dispute Resolution

Insurance laws provide consumer protections including internal appeals processes, external review by state departments of insurance, and litigation rights for wrongful denials.

AI-based decisions must be appealable through these mechanisms.

Third-Party AI Vendors

Vendor Due Diligence

Insurers using third-party AI vendors must conduct due diligence, verifying the vendor's data sources and rights, model validation and bias testing, compliance with insurance regulations, and security and privacy practices.

Regulatory Responsibility

Insurers remain responsible for compliance even when using vendor AI. Regulators will not accept “the vendor did it” as a defense to regulatory violations.

Contractual Protections

Vendor contracts should include compliance warranties, liability allocation and indemnification, audit and transparency rights, and termination rights if compliance issues arise.

Enforcement and Penalties

State Department Actions

State insurance regulators can take enforcement actions for AI violations including cease and desist orders, fines and penalties, license suspension or revocation, and corrective action plans.

Consumer Litigation

Consumers harmed by AI underwriting or claims decisions can sue for breach of contract, bad faith claims handling, unfair or deceptive practices, and discrimination.

Class actions may address systemic algorithmic discrimination.

Federal Enforcement

Federal agencies including FTC, HUD, and CFPB can pursue enforcement for violations of federal anti-discrimination or consumer protection laws.

Emerging State AI Insurance Laws

Colorado AI Insurance Regulation

Colorado has proposed regulations specifically addressing AI in insurance, requiring external audits of AI systems, consumer notices about AI use, and mechanisms for consumers to appeal algorithmic decisions.

Other states are considering similar approaches.

NAIC Model Bulletin

The NAIC issued a model bulletin on insurers' use of big data and AI, providing guidance on compliance with existing laws and encouraging best practices for fairness and transparency.

Best Practices for AI Insurance Compliance

Proactive Compliance Programs

Implement comprehensive compliance programs, including cross-functional teams with legal, actuarial, and technical expertise; regular compliance assessments; clear policies and procedures; and training for employees who use AI systems.

Stakeholder Engagement

Engage with regulators proactively, participate in industry working groups developing standards, and respond to consumer concerns transparently.

Ethical AI Frameworks

Beyond legal compliance, adopt ethical AI principles emphasizing fairness and non-discrimination, transparency and explainability, accountability and oversight, and consumer benefit and protection.

Conclusion: Responsible AI in Insurance

AI offers transformative potential for insurance but requires careful legal compliance. Insurers must navigate complex anti-discrimination requirements, ensure actuarial justification for risk classifications, provide transparency and explainability, and maintain robust governance and oversight.

Proactive compliance protects against regulatory enforcement while building consumer trust essential for AI adoption.

Contact Rock LAW PLLC for Insurance AI Compliance

At Rock LAW PLLC, we help insurers navigate legal requirements for AI systems.

We assist with:

  • Insurance AI regulatory compliance
  • Discrimination and bias testing
  • Rate filing support for AI algorithms
  • Vendor contract negotiation
  • Regulatory investigation defense
  • Consumer litigation defense

Contact us for strategic counsel on AI in insurance underwriting and claims.

Rock LAW PLLC
Business Focused. Intellectual Property Driven.
www.rock.law/