While the search industry frets over “fake news” and hallucinated personas, a more critical question has emerged: When AI engines do get it right, who are they recommending, and why?
A new study by Authoritas, utilizing their AI Search Visibility module, has moved beyond merely debunking fakes to identifying the “Gold Standard” of AI trust. By analyzing how Google’s Gemini, OpenAI’s ChatGPT, and Perplexity respond to inquiries about industry leadership, the study reveals that AI has shifted from ranking pages to vetting entities.
The verdict is clear: In the eyes of an AI, you are not defined by your viral hits or keyword density. You are defined by your corroborated identity.
The Experiment: Identifying the “Chosen Ones”
To understand what genuine authority looks like, Authoritas posed 10 mid-funnel questions to three leading AI engines. These questions utilized the shifting terminology of the industry, asking for experts in “Generative Engine Optimization,” “Answer Engine Optimization,” “Semantic Search,” and “AI Assistive Engine Optimization.”
The researchers then applied a Weighted Citability Score (WCS) to the results, measuring three factors (a rough sketch of how they might combine appears after this list):
- Share of Voice: Total mentions across all answers.
- Breadth: Consistency across different phrasing/contexts.
- Prominence: How early in the response the expert appeared.
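The study does not publish the exact WCS formula, so the snippet below is a purely illustrative sketch of how these three inputs might be combined. The weighting is an assumption, not Authoritas's method, and it will not reproduce the published scores.

```python
# Illustrative sketch only: this weighting is an assumption,
# not the formula Authoritas used for its published WCS values.
def weighted_citability_score(total_mentions: int,
                              questions_appeared_in: int,
                              avg_first_mention_position: float,
                              total_questions: int = 10) -> float:
    share_of_voice = total_mentions                      # raw mention count
    breadth = questions_appeared_in / total_questions    # consistency across phrasings
    prominence = 1 / avg_first_mention_position          # earlier mentions weigh more
    return share_of_voice * breadth * prominence

# Example with the leaderboard figures for the top-ranked expert
print(round(weighted_citability_score(25, 10, 2.4), 2))  # 10.42 under this toy weighting (published WCS: 21.48)
```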
The Terminology: Things, Not Strings
Before identifying who the AI trusts, the study identified what the AI values. When describing the discipline of modern search, the AI models overwhelmingly preferred technical, entity-based terminology over marketing buzzwords.
Cumulative Mentions of Key Terms:
- Semantic SEO / Entity Optimization: 49 mentions
- Answer Engine Optimization (AEO): 16 mentions
- Generative Engine Optimization (GEO): 12 mentions
- Traditional SEO: 12 mentions
This data suggests that while humans are busy inventing acronyms like GEO, the machines view the landscape through the lens of Semantic SEO. They are looking for the technical foundation of the web—entities—rather than just generative output.
The WCS Leaderboard: Who the Machines Actually Cite
Here are the top 10 experts by Weighted Citability Score, across the 10 questions.
| Rank | Expert | Total Mentions | Questions Appeared In | Avg. First-Mention Position | WCS |
|---|---|---|---|---|---|
| 1 | Jason Barnard | 25 | 10 | 2.4 | 21.48 |
| 2 | Evan Bailyn | 9 | 5 | 2.4 | 15.52 |
| 3 | Aleyda Solís | 9 | 5 | 3.0 | 14.40 |
| 4 | Lily Ray | 14 | 7 | 4.6 | 12.71 |
| 5 | Ross Simmonds | 10 | 5 | 5.6 | 10.89 |
| 6 | Rand Fishkin | 7 | 5 | 5.3 | 7.93 |
| 7 | Michael King | 9 | 7 | 4.9 | 7.86 |
| 8 | Dixon Jones | 6 | 5 | 6.3 | 5.60 |
| 9 | Marie Haynes | 9 | 7 | 6.9 | 5.29 |
| 10 | Kevin Indig | 10 | 7 | 8.3 | 3.86 |
It is important to clarify that this is not an exhaustive or popularity-based study. Instead, it offers a focused look at the sources that these AI engines consistently reference when explaining current, AI-era SEO practices.
The Clear Winner
Jason Barnard: The Integrator
Jason Barnard sits in a category of one.
- He appears in all 10 questions
- He has the highest total mention count
- He is, on average, mentioned near the top of the list every time
Jason has been banging the drum for entity-based brand optimization for years. He coined “Answer Engine Optimization” back in 2018, ran multi-part webinar series on the topic, and built an entire methodology (The Kalicube Process) around teaching search engines who you are.
In other words, he did not pivot to AI-era SEO when ChatGPT arrived. The industry and the machines pivoted to the principles he had been formalizing for a decade.
The Hierarchy of Trust: Who the Machines Cite
The Authoritas WCS Leaderboard reveals a distinct hierarchy of trust. The experts who rose to the top did not just have “good content”; they had structural brand integrity.
1. The Category of One: Jason Barnard
WCS: 21.48 | Questions Appeared In: 10/10
Jason Barnard’s performance in the study was anomalous. He was the only expert to appear in response to every single question variation.
- Why he won: Barnard did not pivot to AI; the AI pivoted to him. For a decade, he has focused on the “Kalicube Process”—teaching search engines who an entity is via the Knowledge Graph. He coined “Answer Engine Optimization” in 2018. Because he systematically fed the algorithms structured data about his identity, he is now the “Integrator” the AI trusts implicitly.
2. The Critical Specialists
Below the integrator, the AI identified specialists with deep authority in specific, high-risk domains:
- Evan Bailyn (WCS 15.52): Recognized for Reputation Management. When the topic touches on brand safety, the AI cites Bailyn.
- Aleyda Solís (WCS 14.40): Cited for International & Technical Strategy. Her structured frameworks make her a “safe” recommendation for complex strategy.
- Ross Simmonds (WCS 10.89): The authority on Distribution. His “Create Once, Distribute Forever” philosophy aligns with how AI consumes content across the web.
3. The Operational Pillars
The study also highlighted the “Operators”—experts like Lily Ray (E-E-A-T and Quality) and Michael King (Technical Engineering). The AI consistently referenced them when the context required deep technical validation or trust signal analysis.
The Algorithm’s Checklist: How AI Verifies Experts
Perhaps the most valuable insight from the Authoritas study is the “verification checklist.” When the AI models explained why they trusted these experts (and rejected fakes), they cited specific signals.
The study codified these signals into a hierarchy of importance:
- Certifications & Qualifications: Mentioned by 8/9 models. The strongest signal of legitimacy.
- Official Profiles/Websites: Mentioned by 7/9 models. If you don’t control your “Entity Home,” the AI ignores you.
- Reputable Media & Professional Bodies: Third-party corroboration is essential.
- Conference Talks: Mentioned by 0/9 models as a primary signal.
The Insight: Being a “stage personality” does not translate to algorithmic authority. AI values verifiable data (certifications, official profiles) over ephemeral events (speaking gigs).
Conclusion: You Can’t Fake Recommendation
The contrast between the “fake expert” scandal (where fakes failed to rank for topic queries 100% of the time) and the WCS Leaderboard leads to a definitive conclusion for the AI era.
Recognition is cheap; Recommendation is expensive. You can trick an AI into acknowledging a name exists (Recognition). But to be voluntarily offered as the solution to a user’s problem (Recommendation), you need a corroborated entity.
The winners in this study—Barnard, Bailyn, Solís, and Ray—succeeded because they treated their personal brands as data sets. They built consistent, interconnected digital footprints that left the AI with no choice but to trust them.
Practical Checklist for Entity Optimization
Based on the findings from the Authoritas study, specifically the strategies employed by top-ranked experts like Jason Barnard, here is a practical checklist for Entity Optimization.
This checklist is designed to help you build the kind of “Brand Authority” that Authoritas identified as the primary signal for ranking in AI Assistive Engines like ChatGPT, Gemini, and Perplexity.
Phase 1: Establish Your “Entity Home”
The Authoritas study highlights that every top-ranked expert has a clear, owned digital location that serves as the “source of truth” for the AI.
- [ ] Designate a Single Source of Truth: Choose one page on your own domain (usually the Homepage or an “About” page) to serve as your Entity Home. This is the page you want Google and AI engines to treat as the definitive description of who you are.
- [ ] Write a Clear “Curriculum Vitae” for AI: On this page, state clearly and simply: Who you are, what you do, and who you serve. Avoid vague marketing jargon; use clear nouns and verbs that a machine can easily parse.
- [ ] Centralize Your Links: Ensure this page links out to all your verified profiles (LinkedIn, Crunchbase, Twitter/X, YouTube, etc.). This creates a closed loop of trust.
Phase 2: Technical Confirmation (Speaking the AI’s Language)
The Authoritas analysis revealed that “Semantic SEO” was the most common term used by AI engines. This means you must translate your human brand into machine-readable code.
- [ ] Implement Structured Data (Schema): Add extensive `Organization` or `Person` schema markup to your Entity Home. This is a direct feed to the AI’s knowledge base.
- [ ] Use the `sameAs` Property: In your schema, meticulously list every social profile and third-party bio as a `sameAs` reference. This explicitly tells the AI, “That profile on LinkedIn is definitely the same entity as this website.” (A minimal sketch of this markup follows the list.)
- [ ] Corroborate with Third-Party Sources: Ensure you are listed in reputable, semi-structured databases like Wikidata, Crunchbase, or specialized industry directories. The Authoritas study notes that AI looks for “corroboration from multiple reputable sources.”
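As a concrete starting point, here is a minimal sketch of that markup, expressed as a Python script that emits a JSON-LD `Person` block. The name, job title, and URLs are placeholders, not recommendations; adapt the properties to your own entity.

```python
import json

# Minimal JSON-LD "Person" sketch for an Entity Home page.
# All names and URLs below are placeholders.
entity_home_markup = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Digital Marketing Consultant",
    "url": "https://www.example.com/about",          # the Entity Home itself
    "worksFor": {
        "@type": "Organization",
        "name": "Example Agency",
        "url": "https://www.example.com"
    },
    "sameAs": [                                      # corroborating profiles
        "https://www.linkedin.com/in/janeexample",
        "https://x.com/janeexample",
        "https://www.crunchbase.com/person/jane-example"
    ]
}

# Paste the printed <script> tag into the <head> of the Entity Home page.
print('<script type="application/ld+json">')
print(json.dumps(entity_home_markup, indent=2))
print("</script>")
```

The `sameAs` array does the heavy lifting: each URL is an explicit, machine-readable claim that the listed profile and the Entity Home describe the same entity.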
Phase 3: Consistency & Corroboration
Jason Barnard (ranked #1 in the Authoritas WCS) excelled because his digital footprint was perfectly consistent. AI gets confused by conflicting data.
- [ ] Audit Your Bios: Review every profile you have online (social media, guest author bios, conference speaker pages).
- [ ] Standardize Your Description: Ensure your name, title, and short bio are consistent across all platforms. If you are a “Digital Marketing Consultant” on LinkedIn but a “Growth Hacker” on Twitter, the AI may split your authority or fail to connect the dots. (A crude way to spot-check this is sketched after this list.)
- [ ] Claim Your Knowledge Panel: If a Google Knowledge Panel already exists for your brand, claim it. If not, use the steps above to trigger one. This is the ultimate “stamp of approval” for an entity.
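If you maintain more than a handful of profiles, a rough programmatic spot-check can speed up the bio audit. The sketch below uses placeholder bios (it is not a crawler); paste in your real descriptions and it flags platforms that drift too far from your reference wording.

```python
# Crude consistency spot-check for cross-platform bios.
# The bios below are placeholders; paste in your real ones.
bios = {
    "LinkedIn":  "Jane Example - Digital Marketing Consultant",
    "Twitter/X": "Jane Example | Growth Hacker",
    "Website":   "Jane Example, Digital Marketing Consultant",
}

def normalize(text: str) -> set[str]:
    """Lowercase a bio and reduce it to a rough bag of words."""
    return {word.strip(".,|-") for word in text.lower().split() if word.strip(".,|-")}

reference_platform, reference_bio = next(iter(bios.items()))
reference = normalize(reference_bio)

for platform, bio in bios.items():
    overlap = len(normalize(bio) & reference) / len(reference)
    status = "OK" if overlap >= 0.8 else "CHECK"  # arbitrary threshold
    print(f"{status:5} {platform}: {overlap:.0%} overlap with {reference_platform}")
```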
Phase 4: Build Reputation & Authority (E-E-A-T)
The Authoritas study highlighted experts like Lily Ray and Evan Bailyn for their focus on Trust and Reputation. You must prove you are a safe recommendation.
- [ ] Accumulate Reviews: Actively gather reviews on trusted third-party platforms (Google Business Profile, Trustpilot, G2).
- [ ] Demonstrate Authorship: If you produce content, ensure it has a clear byline linking back to an author bio (which links back to the Entity Home).
- [ ] “Create Once, Distribute Forever”: Following the strategy of Ross Simmonds (ranked #5 by Authoritas), repurpose your expert content across multiple formats (video, text, social). This increases the frequency of your brand mentions in the AI’s training data.
Phase 5: Monitoring
- [ ] Track Your Entity Status: Use tools (like the Authoritas AI Search Visibility module mentioned in the study) to see if AI engines are correctly identifying your brand when asked generic questions about your industry.
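Authoritas’s module does this at scale, but you can run a rough DIY check against a single engine. The sketch below assumes the official `openai` Python client with an `OPENAI_API_KEY` in your environment; the model name, question, and brand names are placeholders to swap for your own.

```python
# Rough DIY visibility check against one AI engine.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
# The model name, question, and brand names are placeholders.
from openai import OpenAI

client = OpenAI()

question = "Who are the leading experts in Answer Engine Optimization?"
brands_to_track = ["Jane Example", "Example Agency"]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)
answer = response.choices[0].message.content or ""

for brand in brands_to_track:
    position = answer.lower().find(brand.lower())
    if position == -1:
        print(f"NOT MENTIONED: {brand}")
    else:
        print(f"MENTIONED: {brand} (first appears at character {position})")
```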
The AI Trust Score: Brand Entity Audit Template
Here is a Brand Entity Audit built directly from the verification criteria identified by the AI models in the Authoritas study.
This audit is not for your human audience; it is for the machines. It measures the “machine-readability” of your authority.
Objective: To determine if an AI Assistive Engine (ChatGPT, Gemini, Perplexity) can unambiguously identify and recommend you as a subject matter expert.
Instructions: Rate each signal on a status of Green (Fully Optimized), Yellow (Present but Weak), or Red (Missing).
1. The Core Signal: Official Profiles & Identity
(Cited by 7/9 AI Models as a critical verification step) AI models prioritize a “Source of Truth” to ground their knowledge. Without this, you are a hallucination risk.
- [ ] Entity Home (Website): Do you have a specific page (e.g., `yourname.com` or `company.com/about`) that explicitly states who you are and what you do?
  - Green Standard: The page uses clear, non-marketing language and includes Schema.org markup (`Person` or `Organization`) linking to all other profiles via `sameAs`.
- [ ] Profile Consistency: Do your bios on LinkedIn, Twitter/X, Crunchbase, and your website tell the exact same story?
  - Red Flag: Different job titles or conflicting career histories across platforms split your authority.
- [ ] Knowledge Panel: Does a Google Knowledge Panel appear when you search your brand name?
  - Goal: This is the ultimate “Entity ID” card.
2. The Authority Signal: Certifications & Qualifications
(Cited by 8/9 AI Models – The #1 most requested signal) Models look for hard, verifiable data points to prove expertise.
- [ ] Explicit Credentials: Are your degrees, certifications, and awards listed clearly on your Entity Home and LinkedIn?
- [ ] Issuer Verification: Can the AI “see” the issuer? (e.g., A link to the university or certification body, or a digital badge).
- [ ] Structured Data: Is your education/certification wrapped in Schema markup?
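A hedged sketch of what “wrapping” a credential can look like: schema.org’s `hasCredential` property on `Person` accepts an `EducationalOccupationalCredential`, which can in turn point at the issuing body. The credential name, organization, and URL below are placeholders.

```python
import json

# Sketch of a credential expressed in machine-readable form via schema.org's
# hasCredential / EducationalOccupationalCredential. All values are placeholders.
credentialed_person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "hasCredential": [{
        "@type": "EducationalOccupationalCredential",
        "name": "Example Certification in Digital Marketing",
        "credentialCategory": "certification",
        "recognizedBy": {                       # lets the AI "see" the issuer
            "@type": "Organization",
            "name": "Example Marketing Association",
            "url": "https://www.example-association.org"
        }
    }]
}

print(json.dumps(credentialed_person, indent=2))
```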
3. The Trust Signal: Professional Bodies
(Highlighted as a key differentiator between fakes and experts) Fake experts rarely belong to verified associations.
- [ ] Active Memberships: Are you a listed member of recognized industry associations (e.g., AMA, IEEE, Chartered Institutes)?
- [ ] Directory Listings: Does the association’s website list you in their public member directory? (This provides a high-trust backlink that AI values).
4. The Corroboration Signal: Reputable Media & Citations
(Used by AI to “cross-reference” your claims) AI does not take your word for it; it checks if others agree.
- [ ] Mainstream Mentions: Have you been quoted or featured in recognized industry publications or news outlets?
- [ ] Contextual Relevance: Do these mentions describe you using the same keywords you use for yourself? (e.g., If you say you are an “AI Expert,” does Forbes call you an “AI Expert”?)
- [ ] Author Pages: Do you have “Author Profiles” on these third-party sites that link back to your Entity Home?
5. The Quality Signal: Peer-Reviewed Work & Publications
(Essential for technical and academic authority)
- [ ] Published Research/Books: Have you authored books, white papers, or academic articles?
- [ ] ISBN/DOI: Do your works have unique identifiers (ISBNs or DOIs) that machines can track?
- [ ] Citations: Are your works cited by other experts in the field?
6. The Social Signal: Reviews
(Validation of real-world activity)
- [ ] Third-Party Reviews: Do you have reviews on neutral platforms (Google Maps, G2, Trustpilot, Amazon)?
- [ ] Sentiment Consistency: Is the sentiment across these platforms generally positive and consistent with your brand promise?
7. The Longevity Signal: Years of Experience
(Used by models like Google AI to build a narrative)
- [ ] Clear Timeline: Does your LinkedIn or “About” page clearly show a timeline of experience?
- [ ] Historical Footprint: Is there content (articles, Wayback Machine entries) that proves you existed in this space 5 or 10 years ago? (AI penalizes “pop-up” experts).
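One lightweight way to check your historical footprint is the Internet Archive’s public availability endpoint, which returns the archived snapshot closest to a given date. A minimal sketch, assuming the `requests` package; the domain and target year are placeholders:

```python
# Quick historical-footprint check via the Internet Archive's availability API.
# Assumes `pip install requests`; the domain and target year are placeholders.
import requests

domain = "example.com"
target_year = "2015"

resp = requests.get(
    "https://archive.org/wayback/available",
    params={"url": domain, "timestamp": f"{target_year}0101"},
    timeout=10,
)
closest = resp.json().get("archived_snapshots", {}).get("closest")

if closest and closest.get("available"):
    print(f"Snapshot closest to {target_year}: {closest['timestamp']} -> {closest['url']}")
else:
    print(f"No archived snapshot found near {target_year} for {domain}")
```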
8. The “Distraction” Signal: Conference Talks
(Cited by 0/9 AI Models as a primary verification factor) Note: While valuable for human networking, the study found AI places low weight on speaking gigs compared to permanent digital footprints.
- [ ] Digital Artifacts: If you speak at conferences, do not just rely on the memory of the event. Ensure the slide deck, video, or transcript is published online so it becomes a readable data point.
Action Plan: How to Use This Audit
- Run the Audit: Go through the checklist above for your Personal Brand or Company Brand.
- Identify Gaps: Look for “Red” areas. (e.g., “I have 20 years of experience, but my website doesn’t explicitly list a timeline.”)
- Prioritize: Start with Signal #2 (Certifications) and Signal #1 (Official Profiles), as these had the highest correlation with AI trust in the study.
- Schema Wrap: Once the content exists, ensure it is marked up with structured data so the machine can parse it without guessing.