Surfaceable analysed AI citation data across 60 brands and 20 industries. Here's what the data shows about which brands get mentioned by ChatGPT, Claude, Gemini, and Perplexity.
Most conversations about AI visibility are driven by intuition, anecdote, and extrapolation from SEO theory. What's been missing is data: systematic, cross-platform measurement of which brands actually get cited, how often, and what distinguishes the ones at the top from the ones that barely register.
This report presents findings from Surfaceable's analysis of citation behaviour across 60 brands in 20 industries, tracked across ChatGPT, Claude, Gemini, and Perplexity. The aim is straightforward — give marketers, SEO professionals, and brand teams a factual baseline for understanding AI visibility performance, so they can make rational decisions about where to invest.
The average AI visibility score across the 60 brands in this analysis is 62 out of 100. That figure is higher than many practitioners expect — a reflection of the fact that brands with meaningful digital presence tend to have some baseline AI visibility by default, simply by existing in the data sources AI tools draw from.
However, the range tells the more important story. The lowest-scoring brand in this analysis scored 20/100. The highest scored 94/100. That 74-point spread between floor and ceiling represents an enormous performance gap — and it's almost entirely explained by deliberate investment (or lack of it) in the signals that drive AI citation.
The median score is 61/100, close to the mean, indicating a broadly even distribution without extreme skewing in either direction. Roughly one-third of brands scored below 50, placing them in territory where AI citation is sporadic and unreliable. One-fifth scored above 80: territory where AI citation is consistent and the brand forms part of the standard response set for relevant queries.
Surfaceable's AI visibility score is a composite of four primary metrics, among them presence rate, position score, and citation accuracy.
A brand can have a high presence rate but a low accuracy score if AI tools cite it incorrectly or with outdated information. Both dimensions matter.
Consulting and Professional Services: 78/100 average
Professional services firms benefit from two structural advantages. First, they tend to be referenced extensively in business press, case studies, and industry publications — sources AI tools weight heavily. Second, consulting firms have historically invested in thought leadership content: research reports, methodology documentation, and detailed how-to guides that are structurally ideal for AI citation.
B2B SaaS and Technology: 74/100 average
Technology brands generally have strong digital footprints — active blogs, detailed product documentation, substantial press coverage, and consistent profiles on software review platforms. These signals accumulate into solid AI visibility. Several B2B SaaS brands in this analysis scored above 85/100, outperforming companies with ten times their revenue.
Financial Services: 69/100 average
Large financial institutions benefit from entity authority (extensive Wikipedia coverage, Knowledge Panel presence, high domain authority) even when their content isn't specifically structured for AI citation. The upper end of financial services is strong; the lower end is pulled down by smaller firms with poor structured data and inconsistent brand information.
E-commerce and Retail: 61/100 average
E-commerce brands show more variance than any other sector. Direct-to-consumer brands with strong content strategies and community presence often outperform large retailers that rely on catalogue-style product pages rather than structured, informational content.
Local Services: 31/100 average
This is the sharpest underperformance gap in the data. Local services businesses — tradespeople, independent retailers, location-specific service providers — have minimal representation in AI training data and typically lack the content infrastructure that drives citations. AI tools have fundamentally less to draw from about hyper-local businesses, and the brands themselves have rarely invested in the signals that would improve this.
Healthcare and Medical: 44/100 average
Healthcare brands face a combination of structural challenges: conservative content strategies driven by regulatory constraints, limited third-party citation beyond directory listings, and AI tools' tendency to defer to general health authorities (NHS, Mayo Clinic, WebMD) rather than individual healthcare brands on medical queries.
Hospitality and Travel: 48/100 average
Travel brands show low AI visibility relative to their size and digital investment. The primary issue is content type: large travel sites are optimised for transactional search, not informational queries, and lack the structured how-to and FAQ content that drives AI citation in discovery and planning queries.
Across all commercial-intent prompts in this analysis, Gemini cited brands 23% more frequently than ChatGPT. This aligns with Gemini's deep integration with Google's commercial index and its tendency to surface product and brand recommendations on purchase-adjacent queries.
For B2B brands, Gemini is the platform with the highest citation rate on "what tool should I use for X?" style queries. If commercial citation is your primary goal, Gemini performance should be the priority measurement.
Perplexity cited the widest range of brands across the analysis — including smaller, newer, and less well-known brands that rarely appeared in ChatGPT or Gemini responses. This is consistent with Perplexity's aggressive real-time web crawling and its tendency to cite sources that directly answer the query, regardless of the source's domain authority tier.
The practical implication: Perplexity is the most accessible platform for brands that haven't yet built significant entity authority. Direct-answer content on a well-crawled site can generate Perplexity citations relatively quickly, even without the broader brand signals required for ChatGPT or Gemini presence.
ChatGPT showed the most conservative citation behaviour — it cited the fewest distinct brands and concentrated its citations most heavily on well-established names. This reflects its stronger weighting of training data authority signals compared to Perplexity's real-time web approach. New or smaller brands find ChatGPT the hardest platform to break into without substantial third-party mention history.
Of the four platforms, Claude produced the most accurate descriptions of brand products and positioning. Its tendency toward careful, nuanced responses translates into more precise brand descriptions, but also a higher threshold for initial citation. Brands cited by Claude tend to have stronger entity consistency across sources.
One of the more striking findings in this analysis is the weak correlation between brand size and AI visibility score. Large enterprise brands do not reliably outperform mid-market brands on AI citations.
Several mid-market B2B SaaS companies in this analysis scored above 80/100 — outperforming Fortune 500 companies in adjacent categories. The explanation is consistent across cases: the mid-market brands had invested specifically in AI-citation-oriented content (structured, answer-led, topic-cluster architecture, FAQ schema), while the enterprise brands relied on brand recognition and domain authority that doesn't automatically translate into AI citation performance.
This is one of the most important findings for practitioners: AI visibility is a leveller. The signals AI tools respond to — content structure, entity consistency, topical depth, schema implementation — are as accessible to a 50-person company as a 5,000-person one. Large brands have advantages in entity authority from sheer volume of press coverage and Wikipedia presence, but these can be partially offset by deliberate investment in content quality and structure.
Across all 60 brands, the signals that correlated most strongly with AI visibility scores above 75/100 were the ones described throughout this report: structured, answer-led content; entity consistency across sources; topical depth; and schema implementation.
The lowest adoption rates among all tracked brands were:
llms.txt file: 8% adoption. This proposed specification, which lets a brand publish a concise, machine-readable guide to its content for AI tools, has seen minimal uptake despite growing awareness. Brands that have implemented a valid llms.txt file showed measurably better citation accuracy: AI tools that respect the file have a clearer picture of the brand's content structure.
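For orientation, a minimal llms.txt sketch following the proposed format (an H1 title, a blockquote summary, then sections of annotated links) looks like the following; the brand name, URLs, and descriptions are all illustrative placeholders:

```markdown
# Example Brand

> Example Brand makes project-management software for small teams. This file
> points AI tools at our most useful, citation-ready content.

## Docs

- [Product overview](https://example.com/product): what the product does and who it is for
- [Pricing](https://example.com/pricing): current plans and what each includes

## Guides

- [FAQ](https://example.com/faq): direct answers to the questions customers ask most
```

The file lives at the site root (`/llms.txt`), alongside robots.txt.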
AI crawler access: 70% of brands fully open; 30% had partial or complete AI crawler blocks. Many of these blocks were unintentional — legacy robots.txt configurations that were never updated to account for AI crawlers like GPTBot, ClaudeBot, PerplexityBot, and Google-Extended. Brands with partial AI crawler blocks showed significantly lower Perplexity citation rates, where real-time web crawling is central to the platform's operation.
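One way to audit for these unintentional blocks is to parse your robots.txt against the AI crawler user-agents named above. A minimal sketch using Python's standard-library `urllib.robotparser`; the function name, crawler list, and example robots.txt are illustrative, not part of Surfaceable's tooling:

```python
from urllib import robotparser

# The AI crawlers discussed above; extend as new ones appear.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_crawlers(robots_txt: str, path: str = "/") -> list[str]:
    """Return the AI crawler user-agents that this robots.txt disallows for `path`."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_CRAWLERS if not parser.can_fetch(bot, path)]

# A legacy robots.txt that allowlists Googlebot and blocks everyone else,
# which silently blocks all four AI crawlers.
legacy = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""
print(blocked_ai_crawlers(legacy))  # reports all four AI crawlers as blocked
```

Running this against your live robots.txt (fetched with any HTTP client) surfaces exactly the kind of legacy wildcard block described above.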
FAQPage schema: 34% adoption. Despite wide awareness of FAQ schema as an AEO tactic, two-thirds of tracked brands have not implemented it on their answer-structured content pages.
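For reference, a minimal FAQPage JSON-LD block, placed in a `<script type="application/ld+json">` tag on the page, looks like this; the question and answer text are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How is the AI visibility score calculated?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "It is a composite of presence, position, and accuracy metrics measured across ChatGPT, Claude, Gemini, and Perplexity."
      }
    }
  ]
}
```

Each question on an answer-structured page becomes one entry in the `mainEntity` array.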
For brands scoring below 50, the priority is fundamentals: unblock AI crawlers, implement Organisation schema, ensure brand information consistency across review platforms, and publish direct-answer content on your top 10 target queries. These actions alone can produce 15–25 point score improvements within 60–90 days.
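The Organisation schema step can be as simple as a JSON-LD block in the site's `<head>`. A minimal sketch with placeholder values (note that schema.org spells the type `Organization`):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://en.wikipedia.org/wiki/Example_Brand"
  ]
}
```

The `sameAs` links are what tie the entity together across sources, which feeds directly into the entity consistency signal discussed throughout this report.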
Scores between 50 and 80 are the most common range: decent baseline visibility, but significant room for improvement. The focus should be entity strengthening (Wikipedia/Wikidata if applicable, Knowledge Panel verification) and topical depth (building out content clusters rather than isolated blog posts). Adding llms.txt and reviewing robots.txt for inadvertent AI crawler blocks are high-impact, low-effort actions.
Brands scoring above 80 typically have strong fundamentals and solid entity presence. The focus shifts to platform-specific optimisation, prompt coverage expansion (identifying and targeting queries you're not yet appearing in), and monitoring share of voice relative to competitors.
Every metric in this report is measurable. If you don't know where your brand currently sits, that's the first problem to solve.
Surfaceable runs a free AI visibility audit that gives you your current presence rate, position score, and the specific gaps — missing schema, crawler blocks, entity signal weaknesses — that are holding down your score. The brands that act on this data now are the ones that will hold defensible AI citation positions when AEO becomes standard practice.
The average brand in this analysis scored 62/100. That leaves substantial room for improvement — and the brands building that advantage today are the ones that will still hold it in three years.