AEO 2026: How to Get Cited by ChatGPT, Claude, and Perplexity
Answer Engine Optimization (AEO) is the practice of structuring content so that AI engines — ChatGPT, Claude, Perplexity, and Google AI Overviews — cite it in their answers. The key signals are: authoritative domain, structured direct answers in the first 60 words, schema markup (FAQ, HowTo, Article), and content that resolves a specific query without ambiguity. This guide covers each signal and how to implement it.
Why AEO is now as important as SEO
In 2023, Google processed roughly 8.5 billion searches per day. By early 2026, ChatGPT reports 100 million daily active users asking questions that previously went to Google. Perplexity processes over 10 million queries per day. Claude.ai, Gemini, and Microsoft Copilot collectively handle hundreds of millions more.
The critical difference: AI engines return one answer, not ten blue links. The content that gets cited occupies the entire answer slot. Content that doesn't get cited gets zero visibility.
For most informational queries — "how does X work", "what is the best Y for Z", "compare A vs B" — AI engines are now the first stop.
The 5 citation signals AI engines look for
Based on observed citation patterns across ChatGPT, Claude, and Perplexity, content gets cited when it scores well on these five dimensions:
1. Direct answer in the first paragraph
AI engines extract the answer from the opening of the article. The very first substantive paragraph should answer the query completely in 40–60 words. Don't bury the answer under background context.
Weak (not citeable):
"Prompt caching is a relatively new feature introduced by Anthropic in 2024. It has several use cases that developers find useful for reducing costs. In this article, we'll explore what it is and how it works."
Strong (citeable):
"Prompt caching reduces cache-read token costs by 90% compared to standard input pricing. It works by storing a prefix of your prompt on Anthropic's servers. For any application that re-sends the same system prompt or document context repeatedly, caching is the highest-ROI optimisation available."
2. Schema markup (FAQ, HowTo, Article)
All three major AI engines use structured data signals. FAQPage schema is the highest-value markup: it explicitly tells crawlers "this page answers these specific questions". Implement it as JSON-LD in your `<head>`:
```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Answer Engine Optimization?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "AEO is the practice of structuring content so AI engines cite it in their answers."
      }
    }
  ]
}
</script>
```
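HowTo markup follows the same JSON-LD pattern. As a sketch, here is one way to generate a HowTo block programmatically in Python; the step names and text below are placeholders, not values prescribed by any spec:

```python
import json

# Illustrative HowTo structured data; step names and text are placeholders.
howto = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to implement FAQPage schema",
    "step": [
        {
            "@type": "HowToStep",
            "name": "Write the Q&A pairs",
            "text": "Draft each question exactly as users phrase it.",
        },
        {
            "@type": "HowToStep",
            "name": "Embed the JSON-LD",
            "text": "Place the script tag in the page head.",
        },
    ],
}

# Serialize to the JSON-LD payload that goes inside the <script> tag.
json_ld = json.dumps(howto, indent=2)
print(json_ld)
```

Generating the payload from a dict rather than hand-writing JSON avoids the quoting and trailing-comma errors that silently invalidate structured data.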
3. Authoritative, specific claims with sources
AI engines strongly prefer content that cites sources, includes specific numbers, and attributes claims. "60% of developers use X" with a citation is far more citeable than "many developers use X".
Include:
- Specific percentages and statistics
- Publication dates (freshness matters)
- Author credentials or organisation name
- Links to primary sources (official docs, research papers)
4. Content structure that matches the query type
Different query types have different optimal structures:
| Query type | Optimal structure |
|---|---|
| "What is X" | Definition in H1/first paragraph + expanded explanation |
| "How to X" | HowTo schema + numbered steps |
| "X vs Y" | Comparison table + direct recommendation |
| "Best X for Y" | Ranked list with criteria |
| "Why X" | Direct cause statement first + supporting evidence |
5. Topical authority (cluster depth)
AI engines weigh domain authority differently from Google PageRank. They prefer sites with deep coverage of a specific topic cluster. A site with 30 articles on Claude API is more likely to be cited for Claude API questions than a general tech blog with one article on the topic.
This means: depth within your niche beats breadth across niches.
The AEO audit: 10 questions to ask about any page
Run this audit on your highest-traffic pages:
- Does the first paragraph answer the target query completely in under 60 words?
- Is there an H2 or H3 that exactly matches common question phrasings?
- Is FAQPage or HowTo schema implemented in JSON-LD?
- Does every statistic have a source and date?
- Is there an explicit "TL;DR" or summary section?
- Is the content specific enough that AI can quote a sentence verbatim?
- Does the article cover the topic cluster deeply (5+ related questions answered)?
- Is the page indexed and crawlable (check robots.txt, canonical tags)?
- Is the content updated within the last 12 months (freshness signal)?
- Does the author/org have a clear credentials statement?
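Several of these checks can be scripted. A minimal sketch using only Python's standard library, covering two of the ten items (first-paragraph length and FAQPage schema); it assumes each JSON-LD payload is a single top-level object:

```python
import json
import re
from html.parser import HTMLParser


class _PageScanner(HTMLParser):
    """Collects JSON-LD payloads and the text of the first <p> element."""

    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.in_p = False
        self.jsonld_payloads = []
        self.first_paragraph = None
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True
            self._buf = []
        elif tag == "p" and self.first_paragraph is None and not self.in_p:
            self.in_p = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "script" and self.in_jsonld:
            self.in_jsonld = False
            self.jsonld_payloads.append("".join(self._buf))
        elif tag == "p" and self.in_p:
            self.in_p = False
            self.first_paragraph = " ".join(part for part in self._buf if part)

    def handle_data(self, data):
        if self.in_jsonld:
            self._buf.append(data)
        elif self.in_p:
            self._buf.append(data.strip())


def audit_page(html: str) -> dict:
    """Check two audit items: opening-answer length and FAQPage schema."""
    scanner = _PageScanner()
    scanner.feed(html)
    words = len(re.findall(r"\S+", scanner.first_paragraph or ""))
    schema_types = set()
    for payload in scanner.jsonld_payloads:
        try:
            # Sketch assumption: each payload is one top-level JSON object.
            schema_types.add(json.loads(payload).get("@type"))
        except ValueError:
            continue
    return {
        "first_paragraph_under_60_words": 0 < words <= 60,
        "has_faq_schema": "FAQPage" in schema_types,
    }
```

Run this against your highest-traffic pages first; the remaining items (source dating, freshness, crawlability) are easier to verify by hand.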
How ChatGPT, Claude, and Perplexity differ in citation behavior
The three major AI engines don't use identical criteria:
ChatGPT (GPT-4o / Browse mode):
- Primarily cites pages from its real-time web search results
- Prefers pages in its training data and confirmed via search
- Strong preference for domains with high existing authority (Wikipedia, official docs, major publications)
- Will cite newer sites if they rank well for the query
Claude (claude.ai with web search):
- Similar web-search-based citation pattern to ChatGPT
- Particularly values direct, specific answers — less tolerant of hedging
- Cites primary documentation heavily for technical topics
Perplexity:
- The most transparent about sources — always shows citation links
- Cites more diverse sources than ChatGPT; newer/smaller sites get cited regularly
- Heavy weight on recency — content updated in the last 6 months gets a boost
- FAQPage schema appears to have a measurable effect on Perplexity citation rate
Practical implication: Perplexity is the most accessible citation target for newer sites. Optimise for Perplexity first, then layer in GPT/Claude optimisations.
12 content templates for AEO
These templates are structured to maximise citation probability:
Template 1: Definition + mechanism + use cases
[Term] is [precise one-sentence definition]. It works by [mechanism].
Use it when [condition 1], [condition 2], or [condition 3].
[Expanded explanation with specific details]
[Comparison table if applicable]
[FAQ section with 5+ questions]
Template 2: How-to guide
[Task] takes [time estimate]. Here's how:
1. [Step with specific action]
2. [Step with specific action]
...
[Code example or screenshot]
[Common mistakes section]
[FAQ]
Template 3: Comparison / X vs Y
[Option A] is better for [use case 1]. [Option B] is better for [use case 2].
[Comparison table with 8+ dimensions]
[When to use A section]
[When to use B section]
[Decision tree or summary]
[FAQ]
Template 4: Best X for Y
The best [X] for [Y] is [answer] because [reason].
[Ranked list with specific criteria]
[How we evaluated]
[Detailed breakdown per option]
[FAQ]
The remaining 8 templates (cost analysis, troubleshooting, migration guide, product comparison, technical deep-dive, news/update, case study, and tool review) are covered in the AEO Playbook.
AEO for technical content (developer sites)
For developer-focused content — API docs, SDK guides, code tutorials — the citation patterns have additional signals:
- Runnable code examples: AI engines cite pages with complete, copy-paste-ready code more often than pages with pseudocode or partial snippets
- Version specificity: Stating "as of [version]" or "as of [date]" dramatically increases citation trust for technical content
- Error message coverage: Pages that include the exact error message text in an H2 get cited when users ask about that error
- Official documentation cross-linking: Citing the official docs (Anthropic, OpenAI, etc.) signals accuracy
Tracking AEO performance
Unlike Google Search Console (which tracks clicks and impressions from Google), there's no unified AEO analytics platform yet. Track indirectly:
- Brand mention monitoring: Use Google Alerts or Mention.com for your site name appearing in AI-generated content that's shared/screenshotted
- Referral traffic from AI interfaces: ChatGPT and Perplexity send referral traffic — check chatgpt.com and perplexity.ai as referrer domains in your analytics
- Direct testing: Ask ChatGPT, Claude, and Perplexity your target queries monthly and check if your site is cited
- GSC: AI Overview appearances will show in Google Search Console as a separate feature in 2026
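The referrer check above can be automated against an analytics export or server logs. A small sketch; the domain list mirrors the referrers named in this section and will need extending as new AI interfaces appear:

```python
from urllib.parse import urlparse

# Referrer domains named in this section; extend as new AI interfaces appear.
AI_REFERRER_DOMAINS = {"chatgpt.com", "perplexity.ai"}


def is_ai_referral(referrer_url: str) -> bool:
    """True if the referrer hostname is (or is a subdomain of) an AI interface."""
    host = (urlparse(referrer_url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in AI_REFERRER_DOMAINS)


def count_ai_referrals(referrer_urls) -> int:
    """Count AI-interface referrals in an iterable of referrer URL strings."""
    return sum(1 for url in referrer_urls if is_ai_referral(url))
```

Matching on the parsed hostname rather than a substring avoids false positives from URLs that merely mention these domains in a path or query string.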
Frequently asked questions
How long does it take to get cited by AI engines? Perplexity can index and cite new content within days if it appears in search results. ChatGPT and Claude web-search mode also cite pages that rank in real-time search, so improving Google rankings indirectly improves AI citation rates. Training-data citation (for queries not using web search) depends on the model's next training cutoff.
Do AI engines cite the same pages as Google's top results? Heavily, but not identically. Perplexity cites more diverse sources. ChatGPT and Claude in web-search mode largely overlap with Google's top-5 results for informational queries. Ranking on page 1 of Google for your target queries is still the most reliable path to AI citation.
Should I write content specifically for AI engines or for humans? For humans. The AEO signals (direct answers, specific claims, clear structure) also make content better for human readers. AEO optimisation has essentially no downside for human UX.
Is there a minimum domain authority for AI citation? No explicit threshold. New domains with excellent content do get cited by Perplexity. Google's authority signals take longer to build but eventually unlock ChatGPT/Claude citations as well.
Related guides
- AEO vs SEO: What Actually Changes in 2026 — where AEO and SEO overlap, and where they conflict
- Schema Markup for AEO: Complete Implementation Guide — FAQPage, HowTo, and Article JSON-LD
Take It Further
AEO Playbook: Rank in AI Answers (Claude, ChatGPT, Gemini) — The complete system for getting cited by AI engines. 134-page PDF with AEO audit framework, 12 content templates, Claude-specific optimization, and AEO checklist.
30-day money-back guarantee. Instant download.