Cited — 2026-04-24

Cited unpacks the AI search citation shifts shaping brand visibility this week.



Welcome to this week's edition of Cited, our newsletter tracking what is actually moving in AI search visibility. We cover the signals, shifts, and structural changes that determine whether your brand gets cited by ChatGPT, Perplexity, Google AI Overviews, and Claude, or gets ignored entirely.

This week: a significant change to how Perplexity handles brand authority signals, an update to Google's AI Overview citation logic that rewards structured entity data, and a breakdown of what the fastest-rising brands in our Lua cohort are doing differently from everyone else.

What Changed This Week in AI Citation Behaviour

Perplexity is rewarding topical depth over domain authority

We've been monitoring citation behaviour across our tracked brands since January, and a clear pattern has emerged in Perplexity's sourcing over the past three weeks. Brands with narrow but deep topical clusters are being cited at roughly 2.3x the rate of brands with broad, shallower content coverage, even when the broader sites carry significantly higher domain authority scores.

This isn't surprising if you understand how Perplexity's retrieval model works. It is optimised for answer precision, not brand prestige. A site that has comprehensively covered a specific niche, with consistent semantic depth across 15 to 20 tightly related pages, looks far more credible to the model than a high-authority generalist site with one or two relevant articles.

The practical implication: if your content strategy is still structured around broad keyword targets and domain-level authority building, you are probably not building AI visibility. You're building traditional SEO visibility, which overlaps less with AI citation logic than most people assume.

Google AI Overviews: structured entity data is now a differentiator

Google has been updating AI Overview citation selection fairly quietly, but we tracked a notable shift this week. Brands with properly implemented schema markup for their core entities (organisation, product, service, FAQ, and how-to) are appearing in AI Overview citations at a meaningfully higher rate than those without.

We ran a quick comparison across 20 brands in our platform. The results are below.

| Schema Implementation | Average AI Overview Citations (weekly) | Average Position When Cited |
| --- | --- | --- |
| Full entity schema (5+ types) | 14.2 | 1.8 |
| Partial schema (2-4 types) | 8.6 | 2.4 |
| Minimal or no schema | 3.1 | 3.7 |

The gap is significant. And the fix is entirely within your control. This is exactly the kind of task Lua schedules and, in many cases, executes automatically as part of a brand's 13-layer optimisation programme.
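To make the fix concrete: entity schema is typically shipped as JSON-LD in a script tag on the page. Here is a minimal sketch in Python of building Organization and FAQPage objects against the schema.org vocabulary; the brand name, URL, and question text are hypothetical placeholders, and a real implementation would cover product, service, and how-to types as well.

```python
import json

def organisation_schema(name: str, url: str) -> dict:
    """Build a schema.org Organization object as JSON-LD."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
    }

def faq_schema(qa_pairs: list[tuple[str, str]]) -> dict:
    """Build a schema.org FAQPage object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

# Hypothetical brand: serialise and embed the result in a
# <script type="application/ld+json"> tag in the page head.
org = organisation_schema("Example Co", "https://example.com")
faq = faq_schema([("What does Example Co do?", "It makes examples.")])
print(json.dumps(org, indent=2))
```

The point of generating the markup programmatically rather than hand-editing it per page is consistency: the same entity signals appear identically across every template, which is what the comparison above rewards.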

A counterargument worth taking seriously

Some in the GEO space argue that schema is a short-term signal and that AI models will eventually move past structured markup as their reasoning improves. That's a reasonable position. But "eventually" doesn't help you this quarter. Right now, schema works. Implement it, track the impact, and revisit when the signals change. That's how you operate in a fast-moving channel.

What the Top-Performing Brands Are Actually Doing

We track visibility scores across our entire user base, looking at what separates the top quartile from the rest. Three behaviours show up consistently.

1. They treat AI search as its own channel, not a subset of SEO

The brands climbing fastest have stopped trying to retrofit their existing SEO programme to serve AI visibility. They've accepted that ChatGPT and Perplexity have different source selection logic than Google's traditional index, and they're building content architectures specifically designed for extraction and citation. This means shorter answer units, clearly attributed claims, and entity-dense writing that gives AI models something concrete to pull from.

2. They are consistent, not sporadic

AI visibility compounds. Brands that publish one strong piece of content every week, optimised for extraction, consistently outperform brands that publish ten pieces quarterly. The cadence matters because AI models update their training and retrieval indices on rolling cycles. Regular, quality output keeps you visible across those cycles. Inconsistent publishing creates gaps that competitors fill.

3. They track across multiple models, not just one

ChatGPT gets most of the attention, but Perplexity is growing fast in professional and research contexts, and Claude is gaining ground in enterprise settings. Brands that monitor only one model miss both the visibility they already have on other platforms and the blind spots they don't know about. The most effective marketing teams in our cohort track all four major platforms weekly, using that data to adjust their content priorities in real time.
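The mechanics of cross-model tracking can be sketched simply. Below is a minimal, hypothetical Python example: weekly citation counts per platform (the numbers are illustrative, not real data) reduced to a per-platform average, with an absent platform surfacing as zero rather than silently disappearing.

```python
# Platforms the newsletter tracks; order is fixed so gaps are visible.
PLATFORMS = ["ChatGPT", "Perplexity", "Google AI Overviews", "Claude"]

def visibility_report(citations: dict[str, list[int]]) -> dict[str, float]:
    """Average weekly citations per platform; untracked platforms report 0.0."""
    report = {}
    for platform in PLATFORMS:
        weeks = citations.get(platform, [])
        report[platform] = sum(weeks) / len(weeks) if weeks else 0.0
    return report

# Illustrative four-week sample for a single brand.
sample = {
    "ChatGPT": [12, 14, 9, 11],
    "Perplexity": [5, 7, 8, 10],
    "Claude": [1, 0, 2, 2],
}
report = visibility_report(sample)
# "Google AI Overviews" is absent from the sample, so it reports 0.0 —
# exactly the kind of blind spot single-model tracking never surfaces.
```

The design choice worth copying is the fixed platform list: a report keyed only on the data you happen to have will never show you the platform you forgot to track.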

What this looks like in practice

One brand in our cohort, a B2B software company with around 60 employees, went from zero ChatGPT citations to appearing on the first page for seven core queries in 38 days. They weren't doing anything exotic. They followed Lua's execution calendar: implemented schema, restructured three pillar pages with cleaner answer formatting, published four new FAQ-style articles targeting extraction queries, and updated their About and product pages with consistent entity signals. Methodical, not magical.

What to Watch Over the Next 30 Days

OpenAI's search product is maturing fast

ChatGPT's search functionality has been improving steadily, and we expect another significant update to its citation sourcing logic before the end of May. Based on early signals, freshness and recency are going to weigh more heavily in source selection. If you haven't built a regular publishing cadence yet, now is the time to start. Brands that already have one will benefit immediately when the update rolls out. Brands that don't will fall further behind. OpenAI's blog is the best place to track official product announcements.

The GEO space is getting crowded, but execution is still rare

More tools are entering the AI visibility space, and most of them are doing the same thing: they run an audit, surface a list of issues, and stop there. Diagnosis without execution isn't a programme. The brands pulling ahead aren't the ones with the longest audit reports. They're the ones actually implementing changes, week by week, and watching their citation rates climb as a result.

If you want to go deeper on any of the signals covered this week, our full tracking data is available inside the Lua platform. You can also review Google's structured data documentation for implementation guidance, and Schema.org for the full entity vocabulary. For a broader view on how AI models select sources, recent retrieval-augmented generation research from the academic community gives useful context on the underlying mechanics.

Next edition drops Thursday. We'll be covering what's happening with Claude's citation behaviour in professional services queries and whether the spike we're seeing is structural or a temporary data artefact.

Cited is published weekly by the team at Lua Rank. We track AI visibility signals so you don't have to start from zero every time something changes.
