July 30, 2025

From SEO to AI Optimization - Insights from the Insurance Sector

Introduction & Key Insights

Conducted by the ERGO Innovation Lab in partnership with ECODYNAMICS, this study investigates how conversational, LLM-based search is redefining digital visibility. By submitting 120 prompts across three insurance products to four AI search engines and benchmarking them against standard Google queries, the researchers retrieved over 33 000 URLs and discovered a striking pattern: broker and aggregator sites account for 36 percent of all citations, while insurers claim only 17 percent. Moreover, LLMs remain vulnerable to misinformation, with an average hallucination rate of 7.6 percent; ChatGPT leads the error chart at 9.7 percent, whereas You.com maintains the lowest rate at 3.1 percent. Four structural levers - machine readability, semantic linking, trust signals, and conversational formatting - emerge as decisive for being surfaced in AI answers.

Study Setup

To mirror realistic search behaviour, the team selected 18 representative Google terms and transformed them into a structured prompt grid of 120 inputs, varying funnel stage, brand context and complexity. Each prompt was executed ten times in ChatGPT, Gemini, Perplexity AI and You.com, while the Google terms were likewise issued ten times in the classic search bar. The test produced 25 441 unique LLM URLs, 5 851 Google-only results, and 2 074 hallucinations that were filtered out. A curated subset of 606 pages was then scored against 20 optimisation criteria grouped into four dimensions, providing a granular view of what content attributes actually matter for LLM retrieval.
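
To make the scale of the setup concrete, here is a minimal sketch of how such a prompt grid could be driven programmatically; the engine list, the 120 prompts and the ten repetitions reflect the study design, while the types and the queryEngine stub are illustrative assumptions rather than the researchers' actual tooling.

```typescript
// Hypothetical reconstruction of the prompt grid described above; only the
// engine names, the 120 prompts and the 10 repetitions come from the study.
type FunnelStage = "awareness" | "consideration" | "decision";

interface Prompt {
  text: string;
  product: string;                      // one of the three insurance products
  funnelStage: FunnelStage;             // varied across the grid
  branded: boolean;                     // with or without brand context
  complexity: "simple" | "complex";
}

const ENGINES = ["ChatGPT", "Gemini", "Perplexity AI", "You.com"] as const;
const REPETITIONS = 10;                 // each prompt issued ten times per engine

// Placeholder for calling an AI search engine; not a real client library.
async function queryEngine(engine: string, prompt: Prompt): Promise<string[]> {
  return [];                            // would return the URLs cited in the answer
}

// Collects every cited URL per engine across the full grid (120 x 4 x 10 runs).
async function runGrid(prompts: Prompt[]): Promise<Map<string, string[]>> {
  const citations = new Map<string, string[]>();
  for (const prompt of prompts) {
    for (const engine of ENGINES) {
      for (let run = 0; run < REPETITIONS; run++) {
        const urls = await queryEngine(engine, prompt);
        citations.set(engine, [...(citations.get(engine) ?? []), ...urls]);
      }
    }
  }
  return citations;
}
```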

Study Findings

First, platform behaviour diverges: ChatGPT and You.com surface the most links but also the most noise, while Gemini and Perplexity return fewer, yet more coherent, results. Second, brokers outperform carriers, not because their brands are stronger, but because their comparison-oriented, modular architecture aligns with the way language models stitch answers together. Third, structural cohesion beats authority: pages that are internally linked, semantically coherent and formatted as concise answer blocks achieve markedly higher retrieval stability than single-page, brand-centric narratives. Finally, the authors signal a strategic inflection point: unless insurers expose pricing, quoting and claims endpoints via APIs, upcoming agentic tools will bypass them entirely and complete customer journeys elsewhere.

Actionable Takeaways for AI-Era Visibility

  1. Raise technical accessibility: ensure lean HTML, sub-two-second load times and clear semantic markup to give LLM crawlers a clean parsing target.
  2. Organise content in topic clusters: a pillar page supported by tightly linked sub-pages boosts semantic density and retrieval consistency (see the cluster sketch after this list).
  3. Amplify trust signals: verified authorship, regulatory disclosures, HTTPS and rich schema.org snippets lower hallucination risk and increase citation odds (illustrated in the JSON-LD sketch below).
  4. Embed conversational modules: short FAQ sections, scenario tables and step-by-step decision aids mirror prompt structure and are preferentially selected by AI engines (the same JSON-LD sketch below shows an FAQ block).
  5. Expose business APIs: open quoting, pricing and claims interfaces so that agentic systems can compare, recommend and bind policies directly; brokers already exploit this gap (see the quoting endpoint sketch below).
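
As an illustration of takeaway 2, the sketch below models a topic cluster as plain data and checks that every sub-page links back to its pillar; the URLs, titles and the linksTo field are hypothetical examples, not content from the study.

```typescript
// Hypothetical model of a pillar page with tightly linked sub-pages.
interface ClusterPage {
  url: string;
  title: string;
  linksTo: string[];   // internal links rendered on the page
}

const householdCluster: ClusterPage[] = [
  { url: "/household-insurance", title: "Household insurance guide",   // pillar page
    linksTo: ["/household-insurance/coverage", "/household-insurance/claims"] },
  { url: "/household-insurance/coverage", title: "What is covered?",
    linksTo: ["/household-insurance", "/household-insurance/claims"] },
  { url: "/household-insurance/claims", title: "How to file a claim",
    linksTo: ["/household-insurance", "/household-insurance/coverage"] },
];

// Simple consistency check: every sub-page should link back to the pillar.
function isTightlyLinked(cluster: ClusterPage[], pillarUrl: string): boolean {
  return cluster
    .filter((page) => page.url !== pillarUrl)
    .every((page) => page.linksTo.includes(pillarUrl));
}

console.log(isTightlyLinked(householdCluster, "/household-insurance")); // true
```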
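
Takeaways 3 and 4 can be combined in a single structured-data block. The sketch below expresses a schema.org FAQPage with an Organization publisher as a TypeScript constant that a page template could serialise into a script tag; the organisation name, URL and questions are invented placeholders rather than content from the study.

```typescript
// Hypothetical JSON-LD payload combining a trust signal (the publishing
// organisation) with a conversational FAQ module.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  publisher: {
    "@type": "Organization",
    name: "Example Insurance Ltd.",                  // placeholder organisation
    url: "https://www.example-insurance.com",        // placeholder URL
  },
  mainEntity: [
    {
      "@type": "Question",
      name: "What does household contents insurance cover?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "It typically covers damage to or loss of movable items in your home, for example through fire, water or burglary.",
      },
    },
    {
      "@type": "Question",
      name: "How quickly is a claim usually paid out?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Straightforward claims are often settled within a few working days once all documents have been submitted.",
      },
    },
  ],
};

// Example: embed the payload when rendering the page (framework-agnostic).
const jsonLdScriptTag =
  `<script type="application/ld+json">${JSON.stringify(faqJsonLd)}</script>`;
```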
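
For takeaway 5, the fragment below sketches what a minimal machine-consumable quoting interface could look like; the endpoint name in the comment, the field names and the rating formula are all assumptions made for illustration, not part of the study.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical shape of a public quoting endpoint that an agentic system
// could call directly; field names and the rating formula are placeholders.
interface QuoteRequest {
  product: "household" | "liability" | "legal";
  coverageAmount: number;   // insured sum in EUR
  deductible: number;       // chosen deductible in EUR
  postalCode: string;       // e.g. for regional risk pricing
}

interface QuoteResponse {
  quoteId: string;
  annualPremium: number;    // EUR per year
  validUntil: string;       // ISO 8601 timestamp
}

// Stand-in rating logic; a real insurer would call its pricing engine here.
function calculateQuote(request: QuoteRequest): QuoteResponse {
  const baseRate = { household: 0.002, liability: 0.0008, legal: 0.0015 }[request.product];
  const premium = request.coverageAmount * baseRate - request.deductible * 0.05;
  return {
    quoteId: randomUUID(),
    annualPremium: Math.max(Math.round(premium), 25),
    validUntil: new Date(Date.now() + 14 * 24 * 3600 * 1000).toISOString(),
  };
}

// Exposed as, say, POST /v1/quotes, this lets comparison sites and agentic
// tools request a binding offer without scraping the website.
console.log(calculateQuote({
  product: "household",
  coverageAmount: 60000,
  deductible: 300,
  postalCode: "40198",
}));
```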

Nukipa Brokr can help you with most of the above steps, from understanding whether you are already visible in AI searches to working out exactly what you should do to become more visible.

The entire study can be read here: https://www.ergo.com/en/newsroom/media-information/2025/20250627-ergo-whitepaper-llm-search

Category
Insights
Written by
Steffen Iwan
Founder, Nukipa Labs