Research · Baz Furby · 10 min read

AI Visibility Benchmark Report 2026: Which Brands Get Cited by AI (and Which Don't)

Surfaceable analysed AI citation data across 60 brands and 20 industries. Here's what the data shows about which brands get mentioned by ChatGPT, Claude, Gemini, and Perplexity.


Most conversations about AI visibility are driven by intuition, anecdote, and extrapolation from SEO theory. What's been missing is data: systematic, cross-platform measurement of which brands actually get cited, how often, and what distinguishes the ones at the top from the ones that barely register.

This report presents findings from Surfaceable's analysis of citation behaviour across 60 brands in 20 industries, tracked across ChatGPT, Claude, Gemini, and Perplexity. The aim is straightforward — give marketers, SEO professionals, and brand teams a factual baseline for understanding AI visibility performance, so they can make rational decisions about where to invest.


Key Findings at a Glance

  • Average AI visibility score across all tracked brands: 62/100 (range: 20–94)
  • Top-performing industry: Consulting and Professional Services (average score: 78/100)
  • Lowest-performing industry: Local Services (average score: 31/100)
  • Gemini cites brands 23% more frequently than ChatGPT on commercial queries
  • Perplexity is the most likely platform to cite smaller brands — lower citation threshold than other platforms
  • Only 8% of brands have a valid llms.txt file — the lowest adoption of any tracked signal
  • 30% of brands had partial or complete AI crawler blocks in their robots.txt

Overall AI Visibility Scores

The average AI visibility score across the 60 brands in this analysis is 62 out of 100. That figure is higher than many practitioners expect — a reflection of the fact that brands with meaningful digital presence tend to have some baseline AI visibility by default, simply by existing in the data sources AI tools draw from.

However, the range tells the more important story. The lowest-scoring brand in this analysis scored 20/100. The highest scored 94/100. That 74-point spread between floor and ceiling represents an enormous performance gap — and it's almost entirely explained by deliberate investment (or lack of it) in the signals that drive AI citation.

The median score is 61/100, close to the mean, indicating a broadly even distribution without extreme skewing in either direction. Roughly one-third of brands scored below 50, placing them in territory where AI citation is sporadic and unreliable. One-fifth scored above 80, where citation is consistent and the brand forms part of the standard response set for relevant queries.

What the Score Measures

Surfaceable's AI visibility score is a composite of four primary metrics:

  1. Presence rate — percentage of relevant prompts in which the brand is cited
  2. Position score — average position within responses when cited (earlier = better)
  3. Accuracy score — how accurately AI tools describe the brand, its category, and its offering
  4. Cross-platform consistency — whether citation behaviour is consistent across ChatGPT, Claude, Gemini, and Perplexity or heavily skewed to one platform

A brand can have a high presence rate but a low accuracy score if AI tools cite it incorrectly or with outdated information. Both dimensions matter.
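To make the interplay between these four metrics concrete, here is a minimal sketch of how a composite score like this could be assembled. The weights are illustrative assumptions, not Surfaceable's actual formula:

```python
# Hypothetical sketch of a composite AI visibility score built from the
# four sub-metrics described above. The 40/20/20/20 weighting is an
# assumption for illustration, not Surfaceable's published methodology.

def visibility_score(presence_rate, position_score, accuracy_score,
                     consistency_score, weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine four 0-100 sub-scores into a single 0-100 composite."""
    metrics = (presence_rate, position_score, accuracy_score, consistency_score)
    if not all(0 <= m <= 100 for m in metrics):
        raise ValueError("each sub-score must be in the range 0-100")
    return round(sum(w * m for w, m in zip(weights, metrics)), 1)

# A brand cited often but described inaccurately still loses points:
print(visibility_score(90, 80, 40, 70))  # → 74.0
```

The point of the sketch is the trade-off it exposes: a strong presence rate cannot fully compensate for a weak accuracy score, which is why both dimensions need monitoring.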


Performance by Industry

Highest-Scoring Industries

Consulting and Professional Services: 78/100 average

Professional services firms benefit from two structural advantages. First, they tend to be referenced extensively in business press, case studies, and industry publications — sources AI tools weight heavily. Second, consulting firms have historically invested in thought leadership content: research reports, methodology documentation, and detailed how-to guides that are structurally ideal for AI citation.

B2B SaaS and Technology: 74/100 average

Technology brands generally have strong digital footprints — active blogs, detailed product documentation, substantial press coverage, and consistent profiles on software review platforms. These signals accumulate into solid AI visibility. Several B2B SaaS brands in this analysis scored above 85/100, outperforming companies with ten times their revenue.

Financial Services: 69/100 average

Large financial institutions benefit from entity authority (extensive Wikipedia coverage, Knowledge Panel presence, high domain authority) even when their content isn't specifically structured for AI citation. The upper end of financial services is strong; the lower end is pulled down by smaller firms with poor structured data and inconsistent brand information.

E-commerce and Retail: 61/100 average

E-commerce brands show more variance than any other sector. Direct-to-consumer brands with strong content strategies and community presence often outperform large retailers that rely on catalogue-style product pages rather than structured, informational content.

Lowest-Scoring Industries

Local Services: 31/100 average

This is the sharpest underperformance gap in the data. Local services businesses — tradespeople, independent retailers, location-specific service providers — have minimal representation in AI training data and typically lack the content infrastructure that drives citations. AI tools have fundamentally less to draw from about hyper-local businesses, and the brands themselves have rarely invested in the signals that would improve this.

Healthcare and Medical: 44/100 average

Healthcare brands face a combination of structural challenges: conservative content strategies driven by regulatory constraints, limited third-party citation beyond directory listings, and AI tools' tendency to defer to general health authorities (NHS, Mayo Clinic, WebMD) rather than individual healthcare brands on medical queries.

Hospitality and Travel: 48/100 average

Travel brands show low AI visibility relative to their size and digital investment. The primary issue is content type: large travel sites are optimised for transactional search, not informational queries, and lack the structured how-to and FAQ content that drives AI citation in discovery and planning queries.


Platform-by-Platform Differences

Gemini Cites Brands More Frequently on Commercial Queries

Across all commercial-intent prompts in this analysis, Gemini cited brands 23% more frequently than ChatGPT. This aligns with Gemini's deep integration with Google's commercial index and its tendency to surface product and brand recommendations on purchase-adjacent queries.

For B2B brands, Gemini is the platform with the highest citation rate on "what tool should I use for X?" style queries. If commercial citation is your primary goal, Gemini should be the first platform you measure.

Perplexity Has the Lowest Citation Threshold

Perplexity cited the widest range of brands across the analysis — including smaller, newer, and less well-known brands that rarely appeared in ChatGPT or Gemini responses. This is consistent with Perplexity's aggressive real-time web crawling and its tendency to cite sources that directly answer the query, regardless of the source's domain authority tier.

The practical implication: Perplexity is the most accessible platform for brands that haven't yet built significant entity authority. Direct-answer content on a well-crawled site can generate Perplexity citations relatively quickly, even without the broader brand signals required for ChatGPT or Gemini presence.

ChatGPT Is Most Conservative

ChatGPT showed the most conservative citation behaviour — it cited the fewest distinct brands and concentrated its citations most heavily on well-established names. This reflects its stronger weighting of training data authority signals compared to Perplexity's real-time web approach. New or smaller brands find ChatGPT the hardest platform to break into without substantial third-party mention history.

Claude Shows Highest Accuracy Scores

Of the four platforms, Claude produced the most accurate descriptions of cited brands' products and positioning. Claude's tendency toward careful, nuanced responses translates into more precise brand descriptions — but also a higher threshold for initial citation. Brands cited by Claude tend to have stronger entity consistency across sources.


Brand Size vs AI Visibility: The Surprising Result

One of the more striking findings in this analysis is the weak correlation between brand size and AI visibility score. Large enterprise brands do not reliably outperform mid-market brands on AI citations.

Several mid-market B2B SaaS companies in this analysis scored above 80/100 — outperforming Fortune 500 companies in adjacent categories. The explanation is consistent across cases: the mid-market brands had invested specifically in AI-citation-oriented content (structured, answer-led, topic-cluster architecture, FAQ schema), while the enterprise brands relied on brand recognition and domain authority that doesn't automatically translate into AI citation performance.

This is one of the most important findings for practitioners: AI visibility is a leveller. The signals AI tools respond to — content structure, entity consistency, topical depth, schema implementation — are as accessible to a 50-person company as a 5,000-person one. Large brands have advantages in entity authority from sheer volume of press coverage and Wikipedia presence, but these can be partially offset by deliberate investment in content quality and structure.


Top Signals Correlated with High AI Visibility Scores

Across all 60 brands, the following signals showed the strongest positive correlation with AI visibility scores above 75/100:

  1. Structured data present on key pages (Organisation, FAQPage, Article schema)
  2. FAQ-format content pages specifically addressing target queries
  3. Wikipedia and/or Wikidata entity entry for the organisation
  4. Active blog with 15 or more published posts in the past 12 months
  5. Google Business Profile claimed and verified
  6. Consistent brand description across major review platforms (G2, Capterra, Trustpilot, Crunchbase)
  7. AI crawler access — no robots.txt restrictions blocking major AI crawlers

The Signals Most Brands Are Missing

The lowest adoption rates among all tracked brands were:

llms.txt file: 8% adoption. This specification, which allows brands to explicitly communicate content structure and permissions to AI tools, has seen minimal uptake despite growing awareness. Brands that have implemented a valid llms.txt file showed measurably better citation accuracy — AI tools that respect the file have a clearer picture of the brand's content structure.
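For brands considering adoption, a minimal llms.txt is a short markdown file at the site root. The sketch below follows the structure of the llms.txt proposal (an H1 name, a blockquote summary, then sections of annotated links); the URLs and descriptions are placeholders, not real pages:

```markdown
# Example Brand

> One-sentence summary of what the brand does and who it serves, written
> for AI tools that read this file.

## Docs

- [Product overview](https://www.example.com/product): what the product does
- [Methodology](https://www.example.com/methodology): how results are measured

## Company

- [About](https://www.example.com/about): company background and leadership
```

Because the file is plain markdown, it costs little to create, but it only helps tools that actually fetch and respect it.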

AI crawler access: 70% of brands fully open; 30% had partial or complete AI crawler blocks. Many of these blocks were unintentional — legacy robots.txt configurations that were never updated to account for AI crawlers like GPTBot, ClaudeBot, PerplexityBot, and Google-Extended. Brands with partial AI crawler blocks showed significantly lower citation rates on Perplexity, where real-time web crawling is central to the platform's operation.
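Checking for this is a matter of reading your robots.txt. A configuration that explicitly permits the AI crawlers named above looks like this (a sketch — restrict the `Allow` paths if parts of your site should stay private):

```
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

A common accidental block is a blanket `User-agent: * / Disallow: /` rule added years ago for a staging site and never revisited.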

FAQPage schema: 34% adoption. Despite wide awareness of FAQ schema as an AEO tactic, two-thirds of tracked brands have not implemented it on their answer-structured content pages.
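Implementing FAQPage schema means embedding a JSON-LD block in the page's HTML. The sketch below uses the standard schema.org structure; the question and answer text are placeholders to replace with your own content:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is an AI visibility score?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A composite 0-100 measure of how often, how prominently, and how accurately AI tools cite a brand."
      }
    }
  ]
}
```

Each question on the page gets its own `Question` object in the `mainEntity` array, with the visible on-page answer mirrored in the `text` field.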


Strategic Implications

For Brands Scoring Below 50/100

The priority is fundamentals: unblock AI crawlers, implement Organisation schema, ensure brand information consistency across review platforms, and publish direct-answer content on your top 10 target queries. These actions alone can produce 15–25 point score improvements within 60–90 days.
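Of those fundamentals, Organisation schema is the quickest to ship. A minimal JSON-LD block for the homepage might look like the sketch below — note that schema.org uses the US spelling `Organization` for the type, and every URL here is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://www.crunchbase.com/organization/example-brand"
  ]
}
```

The `sameAs` links are what tie your site to your profiles elsewhere, which directly supports the brand-consistency signal discussed above.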

For Brands Scoring 50–70/100

This is the most common range — decent baseline visibility, but significant room for improvement. The focus should be entity strengthening (Wikipedia/Wikidata if applicable, Knowledge Panel verification) and topical depth (building out content clusters rather than isolated blog posts). Adding llms.txt and reviewing robots.txt for inadvertent AI crawler blocks are high-impact, low-effort actions.

For Brands Scoring Above 70/100

Brands in this range typically have strong fundamentals and solid entity presence. The focus shifts to platform-specific optimisation, prompt coverage expansion (identifying and targeting queries you're not yet appearing in), and monitoring share of voice relative to competitors.


Check Your Own Score

Every metric in this report is measurable. If you don't know where your brand currently sits, that's the first problem to solve.

Surfaceable runs a free AI visibility audit that gives you your current presence rate, position score, and the specific gaps — missing schema, crawler blocks, entity signal weaknesses — that are holding down your score. The brands that act on this data now are the ones that will hold defensible AI citation positions when AEO becomes standard practice.

The average brand in this analysis scored 62/100. That leaves substantial room for improvement — and the brands building that advantage today are the ones that will still hold it in three years.


Try Surfaceable

Track your brand's AI visibility

See how often ChatGPT, Claude, Gemini, and Perplexity mention your brand — and get a full technical SEO audit. Free to start.

Get started free →