The Path to AI-Powered Insurance Strategy Starts With Program Visibility

In a recent Redhand Advisors RiskTech Webinar, I was joined by Ryan Cantor, Chief Product & Technology Officer at Origami Risk, to explore a challenge that continues to surface in renewal discussions across industries: organizations cannot apply AI meaningfully to insurance programs until they establish a strong data foundation that delivers true program visibility.

The session focused on a simple but critical premise: while enthusiasm for AI in risk and insurance is growing rapidly, most organizations are still constrained by fragmented, inconsistent, or poorly structured policy and program data. Until that issue is addressed, AI remains more aspirational than operational.

AI ambition is high, but data readiness is the real constraint

Across the market, risk and insurance teams are being asked a familiar question: “What’s your AI strategy?”

The challenge is that many teams jump straight to advanced outcomes – predictive analytics, scenario modeling, automated insights – without first addressing the foundational work required to support those capabilities.

In practice, policy and program data often lives in:

· spreadsheets and shared folders

· broker-managed files

· emails and PDFs

· systems that store only high-level policy “header” data

The result is predictable: answers aren’t readily available, analysis is slow, and renewal preparation becomes increasingly reactive. AI does not solve this problem on its own – it exposes it.

Renewals are not a transaction – they are a project

One of the strongest themes from the discussion was the need to reframe how organizations think about insurance renewals and program management.

Renewals increasingly resemble a project lifecycle, not a simple annual transaction:

· multiple stakeholders and dependencies

· structured phases and milestones

· iterative questions, feedback loops, and decisions

· growing data and documentation requirements

When renewals are treated as one-off events, organizations underinvest in process discipline, ownership, and governance. When they are treated as ongoing projects, it becomes easier to justify structured workflows, centralized systems of record, and automation.

What risk teams told us: progress is real, but gaps remain

Live polling during the webinar reinforced what Redhand continues to see in advisory work:

Policy data maturity is improving, but many organizations still describe their data as:

· partially organized

· fragmented across systems

· inconsistent year over year

Confidence in renewal visibility followed a similar pattern. While some teams feel very confident, many still rely on external support, often brokers, to fill gaps. That reliance limits internal insight, flexibility, and long-term leverage.

When asked about readiness to apply AI to insurance programs, most respondents fell into a middle ground: somewhat ready, but with foundational work still required.

Why policy data is still treated as static documentation

Historically, insurance policies were treated as static artifacts:

“Here’s the policy. File it. Move on until next year.”

That mindset no longer works.

Policy and program data should function as a strategic asset, not a reference document. When it remains static and unstructured, organizations face three recurring issues:

1. Limited accessibility – key details (limits, exclusions, endorsements) are buried in PDFs

2. Fragmentation – different stakeholders hold different versions of the truth

3. Disconnected analysis – policy data is not linked to claims, exposures, or financial outcomes

These gaps directly impact decision quality, renewal outcomes, and total cost of risk.

A practical roadmap: building the foundation for AI-enabled programs

Rather than jumping to advanced analytics, the discussion outlined a clear progression that aligns with Redhand’s RMIS and RiskTech advisory experience:

1. Centralize policy and program data

Establish a single system of record with consistent structure. Perfection is not required, but repeatability is.

2. Use AI to automate ingestion

Modern AI can extract structured data from policies, quotes, and endorsements far more efficiently than traditional OCR or manual processes, dramatically reducing time and effort while improving accuracy.

3. Standardize data year over year

Consistent schemas enable trending, benchmarking, and scenario analysis. Without standardization, every renewal becomes a reinvention exercise.

4. Enrich and connect data before scaling analytics

Once data is centralized and standardized, organizations can:

· connect policy data to claims and exposures

· monitor erosion and coverage response

· run “what-if” scenarios

· enable real-time, question-based analysis

The sequence matters. AI delivers value only when the foundation is ready.

Getting started without a massive data cleanup initiative

A recurring concern is resources: “We know this work needs to happen, but where do we start?”

The most effective approach is pragmatic:

· start with the current or upcoming renewal

· pilot with one program or policy

· focus on early wins that reduce friction or cycle time

· use results to build internal buy-in

A go-forward strategy alone can unlock meaningful value. Historical data can be added selectively, only where it supports clear business outcomes.

Broker collaboration improves when clients own the data

Improving internal visibility does not weaken broker relationships; it strengthens them.

When organizations treat program data as their own strategic asset:

· collaboration becomes more structured

· workflows become clearer

· decision-making improves

The goal is not to remove brokers from the process, but to move beyond email-driven exchanges and spreadsheet dependency toward shared workflows with clearer ownership and governance.

Are you “ready” for AI? The honest answer: Yes, if you start correctly.

Ryan offered a provocative answer to the question “How do we know we’re ready?”: You’re ready now.

Because meaningful AI adoption doesn’t require:

· custom models trained on your data

· multi-year science projects

· perfect data maturity on day one

It requires:

· starting with specific, high-friction use cases

· proving value with a pilot

· putting humans in the loop

· and steadily improving data foundations as you scale

The bigger risk isn’t experimenting too early. It’s waiting too long, because renewal complexity and market expectations for data will only continue to rise.

Watch the webinar on-demand

If you’d like to see the full discussion in detail, you can watch the webinar on-demand here:

https://register.gotowebinar.com/recording/2203120488878529452/