How to Rank in ChatGPT Answers

If your buyers ask ChatGPT, Gemini, or Perplexity which tool to choose—and your brand isn’t in the answer—you’re invisible at the exact moment of intent. This guide shows how to earn inclusion, win citations, and turn AI answers into pipeline with AI Engine Optimization (AEO) and Project 40.

Why AI Answers Are the New Front Page of the Internet

Buyer research has shifted from keyword searches to conversational questions. Instead of scanning ten blue links, people ask, “What’s the best X for Y?” and expect a concise, cited recommendation inside an AI answer engine (AAE) such as ChatGPT, Gemini, or Perplexity. These assistants synthesize evidence, cite sources (Perplexity by default), and increasingly drive shortlists for SMB and B2B purchases.

  • Discovery now starts with questions, not queries. Assistants compress research into answer cards with links and citations.
  • Shortlists form earlier. A single assistant response can elevate or exclude your brand before a prospect ever visits your site.
  • Local and B2B converge. Daily questions like “best roofer near me offering same‑day repair” or “top intent‑data alternatives for startups” are answered conversationally.

See where you stand in AI answers

Get your personalized AI Visibility Report from Project 40 and identify the prompts and models where your brand is missing.


Diagram: from prompt to answer, a buyer question flows through retrieval, grounding, model reasoning, and citations; assistants gather evidence before generating text.

How ChatGPT Chooses Brands to Mention

Large language models don’t “crawl and rank” like classic search engines. For current, factual answers, assistants ground outputs with retrieval and tools, then cite evidence.

What feeds the answer box:

  • High‑signal pages: concise claims, concrete evidence, and updated facts.
  • Structured data: JSON‑LD for Organization, Product, FAQ, and HowTo (schema.org).
  • Third‑party corroboration: reviews, analyst mentions, public datasets, and consistent facts across profiles (e.g., LinkedIn, Crunchbase, G2, GitHub).
  • Entity clarity: unambiguous Organization and Product entities with consistent names, NAP (name/address/phone), and canonical URLs.
  • Topical authority: coverage of key questions and comparisons within your category.
  • Freshness: recent updates, release notes, and change logs to signal currency.

Why many brands stay invisible:

  • Fragmented claims scattered across multiple pages.
  • No entity linking or schema—assistants can’t resolve who you are and what you sell.
  • Outdated details (pricing, features, leadership) that conflict across sources.
  • Thin topical coverage and no comparison/alternatives content.

AEO vs SEO: What Actually Changes

AI Engine Optimization (AEO) is the discipline of earning inclusion and positive citations in assistant answers. It complements SEO but shifts the focus from ranking pages to being named as the best entity for a given job‑to‑be‑done.

  • What carries over: E‑E‑A‑T principles, reviews and reputation, internal linking, technical hygiene, and fast, secure pages.
  • What’s new in AEO: prompt taxonomies, answer patterns, retrieval‑ready pages, multi‑model coverage (ChatGPT, Gemini, Perplexity), explicit evidence blocks, and alignment with assistants’ fact‑checking behavior.

New KPIs to track:

  • Share of Assistant Answers (SAA): the percentage of tested prompts where your brand is included or cited.
  • Citation rate: how often your pages are linked in answers (especially in Perplexity).
  • Brand mention sentiment: positive/neutral/negative tone in assistant descriptions.
  • Chat‑to‑site CTR: clicks from assistant answer cards to your site.
  • Assistant‑sourced pipeline: opportunities and revenue tagged to assistant‑initiated sessions or self‑reported attribution.

The 7‑Step Playbook to Appear in AI Answers

1) Map Buyer Prompts and Jobs‑To‑Be‑Done

Group prompts by intent. Build a prompt library you can test and optimize against each month.

Intent clusters and prompt examples (copy‑ready):

Problem discovery

  • “How can I increase ChatGPT brand visibility without ads?”
  • “Ways to generate leads from ChatGPT or Gemini for a B2B SaaS startup.”
  • “What is an AI visibility platform and who needs it?”
  • “How do I appear in AI answers for local services?”
  • “Best approach to AI search optimization for SMBs.”
  • “How to improve Perplexity citations for my product.”
  • “Entity SEO vs AEO: what’s the difference?”
  • “How to rank in ChatGPT answers for category keywords.”
  • “Win more organic leads from AI engines.”
  • “Reduce dependency on ABM ads using AI engines.”

Comparison

  • “Project 40 vs 6sense for AI visibility.”
  • “Apollo.io vs Project 40: lead gen from AI assistants.”
  • “Lusha vs Project 40 for SMB growth.”
  • “RollWorks alternatives that help with AI search optimization.”
  • “Best AI visibility platforms for B2B.”
  • “AEO tools compared: features and pricing.”
  • “Competitor AI visibility report tools.”
  • “Which platforms increase visibility in Gemini?”
  • “ABM vs AEO: when to use each.”
  • “Top tools to generate leads from ChatGPT.”

Alternatives

  • “Best alternatives to 6sense for AI answer visibility.”
  • “Apollo.io alternatives for AI‑driven lead generation.”
  • “Lusha alternatives for AI SEO for small businesses.”
  • “RollWorks alternatives focused on AEO.”
  • “Non‑ad channels for B2B demand using AI engines.”
  • “Tools to appear in ChatGPT and Perplexity answers.”
  • “Platforms to monitor competitor share of answers.”
  • “How to build AI landing pages at scale.”
  • “Entity optimization tools for Organization/Product.”
  • “AEO playbook templates.”

Local

  • “Best plumber in Austin that offers same‑day repair.”
  • “Top-rated roofer near me with emergency service.”
  • “Accountant in Denver specializing in startups.”
  • “IT support in Chicago with 24/7 help desk.”
  • “Marketing agency in Phoenix that optimizes for AI search.”
  • “Which local dentists take Saturday appointments?”
  • “Nearby landscaper that handles xeriscaping.”
  • “Best HVAC company in Raleigh for heat pumps.”
  • “Window repair in Seattle with same‑week availability.”
  • “Affordable wedding photographer in Miami with reviews.”

Pricing

  • “How much do AI visibility platforms cost?”
  • “Project 40 pricing model vs ABM tools.”
  • “Budgeting for AI search optimization at an SMB.”
  • “Cost to generate leads from ChatGPT vs ads.”
  • “Total cost of ownership: Project 40 vs 6sense.”
  • “What’s included in an AI Visibility Report?”
  • “Are AI landing pages included or charged per page?”
  • “Do I need separate tools for local AEO?”
  • “Team and time required for AEO.”
  • “Scaling AEO across multiple markets.”

Implementation

  • “How to implement JSON‑LD Product schema for AEO.”
  • “Set up an answer‑ready page for ‘best AI visibility platforms.’”
  • “Create a comparison page: Project 40 vs 6sense.”
  • “How to measure Share of Assistant Answers (SAA).”
  • “How to track assistant‑sourced traffic with UTMs.”
  • “How to get Perplexity citations to my page.”
  • “Building a facts page that assistants can trust.”
  • “AEO for multi‑location SMBs.”
  • “AEO for agencies managing multiple clients.”
  • “Maintain a public changelog for assistants.”

2) Audit Your Current AI Visibility (Baseline)

Start with a benchmark across models and intents.

  1. Pick 50–100 prompts from your library across the six intents.
  2. Manually test in ChatGPT, Gemini, and Perplexity. Record if your brand is mentioned and whether your page is cited.
  3. Log answer phrasing: what claims are used, which competitors appear, and which sources are cited.
  4. Repeat monthly to track SAA and citation rate (a scoring sketch follows this list).
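
If you keep the test log in a spreadsheet, a short script can turn it into the headline metrics. Here is a minimal sketch, assuming a CSV export with one row per prompt-and-model test and hypothetical column names (prompt, model, brand_mentioned, page_cited); rename them to match your own log.

```python
import csv
from collections import defaultdict

def visibility_metrics(log_path: str) -> None:
    """Compute Share of Assistant Answers (SAA) and citation rate per model
    from a manual prompt-test log with columns: prompt, model,
    brand_mentioned, page_cited (values "yes"/"no")."""
    stats = defaultdict(lambda: {"total": 0, "mentioned": 0, "cited": 0})
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            s = stats[row["model"]]
            s["total"] += 1
            s["mentioned"] += row["brand_mentioned"].strip().lower() == "yes"
            s["cited"] += row["page_cited"].strip().lower() == "yes"
    for model, s in sorted(stats.items()):
        print(f"{model}: SAA {100 * s['mentioned'] / s['total']:.0f}%, "
              f"citation rate {100 * s['cited'] / s['total']:.0f}% "
              f"({s['total']} prompts)")

# Hypothetical file name; point this at each month's export to track the trend.
visibility_metrics("prompt_tests.csv")
```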

Faster option: request Project 40’s AI Visibility Report to quantify SAA, model‑by‑model gaps, competing entities, and the exact evidence you’re missing.

3) Structure Your Brand as an Entity

Make it easy for assistants to resolve who you are and what you offer.

  • Implement JSON‑LD for Organization and Product. Include official name, URL, logo, sameAs profiles, and product specs (see the sketch after this list).
  • Maintain consistent NAP across your site and third‑party profiles (LinkedIn, Crunchbase, G2, GitHub).
  • Create a machine‑verifiable facts page (e.g., “About Project 40: Fast Facts”) with founding year, HQ, feature set, pricing model, integrations, and support SLAs.
  • Link to authoritative third‑party profiles and documentation from your facts page.
  • Publish a public changelog and release notes to signal freshness.
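
Here is a minimal sketch of what that markup can look like, generated in Python so the schema and your facts page draw from the same source. Every value shown is a placeholder for a fictional brand; substitute your verified details.

```python
import json

# Placeholder facts for a fictional brand; replace with your verified details.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://www.crunchbase.com/organization/example-brand",
        "https://github.com/example-brand",
    ],
}

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Platform",
    "url": "https://www.example.com/platform",
    "brand": {"@type": "Brand", "name": "Example Brand"},
    "description": "Platform that helps brands earn inclusion and citations in AI assistant answers.",
}

# Emit <script type="application/ld+json"> blocks for your page templates.
for block in (organization, product):
    print('<script type="application/ld+json">\n' + json.dumps(block, indent=2) + "\n</script>")
```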

4) Build Answer‑Ready Pages

Create pages that align to the exact prompts you want to win. Each page should make a clear claim, supply evidence, and include schema.

  • Target examples: “best AI visibility platforms,” “tools to appear in ChatGPT,” “local plumber in Austin using AI answers.”
  • Include: concise claim, proof (case stats, third‑party quotes), comparison table, FAQs, and schema (Product, FAQPage, HowTo), plus clear CTAs (a FAQPage sketch follows this list).
  • Ensure retrieval readiness: plain language headings, scannable lists, and evidence blocks that can be cited verbatim.
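
For the schema piece of these pages, FAQPage markup follows the same pattern as the Organization and Product blocks in step 3. A minimal sketch with a placeholder question and answer; keep the markup identical to the FAQ copy readers see on the page.

```python
import json

# Placeholder Q&A; mirror the FAQ text that is visible on the page.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is an AI visibility platform?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "An AI visibility platform helps a brand get included and cited "
                        "in answers from assistants such as ChatGPT, Gemini, and Perplexity.",
            },
        }
    ],
}

# Embed in a <script type="application/ld+json"> tag, as in step 3.
print(json.dumps(faq_page, indent=2))
```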

Scale with Project 40: use the AI Landing Page Generator to publish optimized pages at scale, complete with dynamic reviews and spec blocks.

5) Create Comparison and Alternative Content (Without FUD)

Buyers ask assistants to compare tools. Meet that need with fair, transparent comparisons.

  • Write “Project 40 vs 6sense/Apollo/Lusha/RollWorks” pages with clear criteria: focus, capabilities, pricing approach, best‑fit segments.
  • Publish “Best alternatives to 6sense for AI visibility” and keep listings up to date.
  • Emphasize differences: ABM/data enrichment vs AI answer visibility; ad‑spend reliance vs organic AI demand capture.

Project 40 vs ABM/data tools (fair comparison):

  • Core focus: Project 40 is an AI visibility platform (AEO); 6sense covers ABM and intent data; Apollo.io covers sales engagement and B2B data; Lusha covers B2B contact data enrichment; RollWorks covers ABM and advertising.
  • AI engine coverage: Project 40 optimizes inclusion in ChatGPT, Gemini, and Perplexity; 6sense, Apollo.io, Lusha, and RollWorks are not focused on AI answer inclusion.
  • Competitor monitoring (AI answers): Project 40 tracks rival share of answers and citations; 6sense and RollWorks offer intent or ad signals, but AI answers are not core; it is not core for Apollo.io and Lusha.
  • Landing page automation: Project 40 includes an AI Landing Page Generator for answer‑ready pages; this is not a feature of the other tools.
  • Local prompt targeting: Project 40 is built for multi‑location SMBs; not core for the others.
  • Measurement: Project 40 reports SAA, citation rate, and assistant‑sourced pipeline; 6sense reports intent, ad, and campaign metrics; Apollo.io reports engagement and sequences; Lusha reports data coverage and accuracy; RollWorks reports ad impressions and account metrics.
  • Cost of ownership: Project 40 acts as an organic acquisition channel; 6sense and RollWorks typically involve media and data spend; Apollo.io and Lusha are license‑based and not organic channels.

Notes: Information reflects each vendor’s public positioning at a high level. Always verify features and pricing on the vendor’s website.

6) Strengthen Third‑Party Evidence

  • Encourage and syndicate reviews (e.g., G2, Capterra). Quote customers and link to verified profiles.
  • List integrations with documentation links.
  • Seek mentions from reputable industry bodies and associations; keep facts consistent everywhere (name, features, pricing model).

7) Monitor, Optimize, and Defend Your Narrative

  • Set a monthly cadence by prompt cluster and model. Track SAA, citation rate, and sentiment.
  • Use Project 40’s Competitor Analysis Engine to monitor rival share of answers and close gaps.
  • Iterate headlines, claims, and schema with Content Optimization Tools.
  • Maintain a public changelog of facts so assistants “see” updates; avoid unverifiable or sensational claims that assistants may discount or misstate.

Diagram: the AEO playbook as a continuous cycle (map prompts, build entity clarity, publish answer-ready pages, strengthen evidence, measure SAA, optimize, and defend your narrative) that you can operationalize with Project 40.

Local SEO Meets AEO: Winning Daily Buyer Questions

For SMBs, assistants often answer with a short list plus citations. To earn a spot:

  • Create city/service AI pages with embedded NAP, reviews, service radius, and photos.
  • Add prompt‑driven FAQs like “Who’s the best roofer near me that offers same‑day repair?” and answer in plain language.
  • Include local proofs: map embeds, licensing and insurance details, hours, and emergency options.
  • Keep details consistent across Google Business Profile, site schema, and third‑party directories (see the LocalBusiness example after this list).
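
One way to keep those details aligned is to encode them once as LocalBusiness markup and reuse it across your city/service pages. A minimal sketch with placeholder business details; match them exactly to your Google Business Profile and site footer.

```python
import json

# Placeholder details for a fictional local business; keep them identical to
# your Google Business Profile, site footer, and directory listings.
local_business = {
    "@context": "https://schema.org",
    "@type": "Plumber",  # choose the closest schema.org LocalBusiness subtype
    "name": "Example Plumbing Co.",
    "telephone": "+1-512-555-0100",
    "url": "https://www.exampleplumbing.com",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "100 Example St",
        "addressLocality": "Austin",
        "addressRegion": "TX",
        "postalCode": "78701",
        "addressCountry": "US",
    },
    "areaServed": "Austin, TX",
    "openingHours": "Mo-Sa 08:00-18:00",
}

# Embed in a <script type="application/ld+json"> tag on the matching city/service page.
print(json.dumps(local_business, indent=2))
```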

Product‑led path: Project 40’s SMB Growth Agent automates local prompt targeting, AI landing pages, and review syndication to help you appear in “near me” assistant answers.

Implementation Checklist (Copy‑Paste)

  • Create a 50–100 prompt library across six intents.
  • Benchmark SAA and citation rate in ChatGPT, Gemini, and Perplexity.
  • Publish an “About: Fast Facts” page with machine‑verifiable details.
  • Add Organization and Product JSON‑LD; include sameAs links.
  • Ensure consistent NAP across your site and profiles.
  • Stand up answer‑ready pages for top prompts (claim + evidence + schema).
  • Use the AI Landing Page Generator to scale pages.
  • Build fair comparison pages (Project 40 vs 6sense/Apollo/Lusha/RollWorks).
  • Publish “Best alternatives” and keep criteria transparent.
  • Add FAQPage and HowTo schema to relevant sections.
  • Integrate reviews and third‑party quotes; link to sources.
  • List integrations with docs; keep a public changelog.
  • Implement internal linking to your facts and comparison pages.
  • Set monthly measurement: SAA, citation rate, sentiment, chat‑to‑site CTR.
  • Activate Competitor Analysis Engine monitoring.
  • Iterate copy and schema with Content Optimization Tools.
  • Launch local AI pages with NAP, reviews, service radius, and photos.
  • Align Google Business Profile with on‑site facts and schema.
  • Establish review operations (ask, respond, syndicate).
  • Set attribution plan with UTMs and self‑reported attribution.
  • Create a welcome page for assistant traffic (see tracking plan below).
  • Document governance: who updates facts, when, and where.
  • Re‑test prompts after each site change or release.
  • Maintain multi‑model coverage (ChatGPT, Gemini, Perplexity).
  • Quarterly audit of entity consistency across the web.

Case Snapshot (Replace with Real Data)

Use this template to communicate outcomes without speculation. Replace bracketed text with your real numbers.

  • Context: “[Category] SaaS targeting SMBs in [region].”
  • Baseline: “SAA across ‘best [category] tools’ prompts at [X]% across models.”
  • Intervention: “Published [N] answer‑ready pages and [N] comparison pages; added Organization/Product schema; consolidated facts.”
  • Result: “SAA increased to [Y]% and assistant‑sourced demos changed by [Δ]% within [period].”
  • Notes: “Top citations were [URLs]. Biggest gains came from [prompt cluster].”

FAQs

Does ChatGPT have “SEO”? What is AEO?
There isn’t traditional SEO for ChatGPT. Assistants ground answers with retrieval and tools, then cite evidence. AEO is optimizing entities, evidence, and answer‑ready content so assistants can confidently include and cite your brand.
How long does it take to appear in answers?
Time varies by category and the strength of your evidence. Publishing answer‑ready pages and aligning entities can produce inclusion in some prompts relatively quickly, while competitive prompts take longer. Measure progress monthly.
Is optimizing for AI answers allowed by AI providers?
Following provider guidelines and using accurate, verifiable information is encouraged. Avoid manipulative tactics; focus on clarity, evidence, and consistency.
How do I measure success if models don’t send referrer data?
Use prompt testing to track SAA and citation rate. Attribute traffic with UTMs, welcome pages, and a self‑reported “How did you hear about us?” field that includes “ChatGPT/Gemini/Perplexity.”
Will AEO replace SEO?
No. SEO and AEO complement each other. Classic search still matters; assistants increasingly influence shortlists. Winning both makes your brand resilient.

Try Project 40: Make AI Answers Your New Lead Channel

Project 40 is an AI visibility platform that helps brands become the #1 answer in AI engines and convert assistant attention into pipeline.

Bonus: Download a Prompt Library for Your Industry

Get a ready‑to‑use prompt set for your market.

Attribution and UTM Plan

  • Tag internal links from answer‑ready pages you expect assistants to cite with utm_source=assistant, utm_medium=organic, and utm_campaign=[model‑cluster] (see the sketch after this list).
  • Create a welcome page at /welcome/assistant that reads UTMs and asks “Did you find us via ChatGPT, Gemini, or Perplexity?”
  • Add a self‑reported attribution field on demo forms including “ChatGPT/Gemini/Perplexity.”
  • Log prompt‑level testing results alongside analytics to correlate inclusion with traffic and pipeline.
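
To keep the tagging consistent, the UTMs can be appended programmatically wherever answer‑ready pages link to demo or signup pages. A minimal sketch using only Python's standard library; the page URL and cluster name are placeholders.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_internal_link(url: str, model_cluster: str) -> str:
    """Append the assistant-attribution UTMs to an internal link while
    preserving any query parameters already on the URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": "assistant",
        "utm_medium": "organic",
        "utm_campaign": model_cluster,  # e.g. "chatgpt-comparison"
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Placeholder page URL and cluster name; use your real answer-ready pages.
print(tag_internal_link("https://www.example.com/demo", "chatgpt-comparison"))
# -> https://www.example.com/demo?utm_source=assistant&utm_medium=organic&utm_campaign=chatgpt-comparison
```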