AI search is recreating the worst parts of old SEO because brands are already publishing self-serving comparison pages, engineered citation bait, and low-trust content designed to influence models instead of helping users.
That shift is happening faster than most operators expected.
For the past year, the optimistic version of GEO said AI search would reward clearer writing, stronger evidence, and better structured content. Part of that is true. But the darker side is already visible. As soon as AI engines started influencing discovery at scale, marketers imported the oldest SEO instinct on the internet: if a system controls traffic, people will try to game it.
This week gave us unusually clear proof. The Verge reported on brands building comparison pages engineered to win citations inside Google AI Mode and other AI answer environments. At the same time, Zero Click SF ran as a standalone event around discovery driven by ChatGPT, Claude, Gemini, and Perplexity, which is a strong market signal that zero-click visibility is no longer a niche theory. Practical Ecommerce also highlighted Durable’s new discoverability feature for getting found on ChatGPT, Gemini, Grok, and Perplexity. Separately, vendors like Profound keep pushing measurement, crawl tracking, and “Zero Click 2026” positioning. The category is professionalizing quickly.
That is the good news and the problem.
Once a channel becomes measurable, budget follows. Once budget follows, manipulation follows.
AI Search Did Not Kill SEO Behavior, It Compressed It
A lot of commentary still frames GEO as a clean break from SEO. That is too neat.
The better way to think about it is this: AI search changed the interface, but it did not change human incentives. Brands still want distribution. Agencies still want retained revenue. Publishers still want traffic. Software companies still want to become the default recommendation in a high-intent category.
The difference is speed.
Classic SEO had friction. You published a page, waited for crawling, watched rankings move, built links, tested titles, and learned slowly. AI search compresses that loop because the output is narrower and the reward is more obvious. If a model answers a category question with three brands, everyone else disappears. That makes the incentive to manipulate much stronger.
In SEO, ranking seventh still gave you a chance. In AI search, being recommendation number four often means you do not exist.
That is why the current wave feels more aggressive than the early days of search spam. The shelf is smaller.
The Verge Story Matters More Than It Seems
The Verge’s reporting matters because it showed the exact pattern many operators suspected was coming: brands creating self-serving comparison assets tailored for AI citation systems.
This is not new in spirit. Old SEO was full of “best X software” pages written by the companies selling X software. Affiliate sites built fake review ecosystems. Agencies built city pages for locations they barely served. Publishers mass-produced thin content because ranking for long-tail queries was profitable.
AI search creates a newer version of the same playbook:
- publish comparison pages that look editorial but are commercially biased
- write answer-first sections optimized for extraction
- add citation-friendly formatting so models can reuse fragments
- repeat the same positioning across multiple pages and domains
- hope the model mistakes repetition for authority
That is not a hypothetical framework. It is already how the market is moving.
The reason is structural. Models prefer concise, extractable, apparently authoritative text. That means the easiest content to cite is often not the best content. It is the most citation-shaped content.
This is also why what content gets cited by AI is not just an editorial question. It is now a competitive vulnerability. Once people learn the format that wins extraction, they will mass produce it.
Zero-Click Became a Category, and That Changes Behavior
Zero Click SF is not important because of the event itself. It is important because it shows the market now sees zero-click and AI visibility as a budget line, not a side conversation.
That distinction matters.
When a problem becomes a budget line, companies do three things fast:
- they assign ownership
- they buy tooling
- they demand outcomes
That is the moment a discovery channel starts attracting systematic manipulation.
SEO went through the same cycle. At first it was experimentation. Then it became a department. Then it became a vendor category. Then it became an arbitrage machine. AI visibility is on the same path, just faster, because the software, playbooks, and capital already exist.
The new GEO stack is forming in public. We wrote about that earlier in GEO tools becoming their own category. What that piece implied, and what the last 24 hours make more explicit, is that observability tools will not only help brands improve. They will also help brands reverse engineer what works and scale low-trust tactics faster.
Measurement is necessary. It is not morally neutral.
Why AI Engines Are Especially Vulnerable to Trust Theater
Classic search at least separated ranking from content rendering. AI engines collapse retrieval, synthesis, and recommendation into one surface. That creates a bigger trust problem.
Here is why.
1. The answer looks finished
A SERP looks contestable. Ten links signal that the user should compare. An AI answer looks settled. Even when it cites sources, the experience implies that the machine has already done the comparison work.
That makes low-trust source selection more dangerous. If a model cites a biased comparison page as if it were neutral analysis, the user may never inspect the source closely.
2. Repetition can masquerade as authority
Language models are vulnerable to repeated framing across the web. If multiple pages describe a brand in similar language, the model may internalize that description as a stable fact pattern. That creates an opening for coordinated positioning campaigns.
This is one reason brand mentions are increasingly separate from backlinks. As we argued in brand mentions vs backlinks, entity recognition can be shaped by repeated association, not just linked authority.
3. Citation is not the same as validation
Users see a citation and assume verification. Those are not the same thing.
A model can cite a page because it is structurally easy to use, not because it is methodologically strong. Citation frequency can become a proxy for convenience rather than trust.
4. Commercial prompts are naturally compressive
The highest-value AI prompts are things like:
- best CRM for small business
- top AI visibility tools
- best hotels in Rome for families
- which accounting software should a startup use
These prompts naturally compress choice into a handful of names. That means being included matters more, and exclusion hurts more. High compression always increases manipulation pressure.
The New Spam Will Not Look Like Spam
This is where a lot of smart people are being naive.
They expect AI search abuse to look like old search abuse: obvious keyword stuffing, doorway pages, spun content, and junk backlinks. Some of that will happen. But the more effective manipulation will look polished.
The new spam will often look like:
- elegant comparison pages with a built-in commercial bias
- polished thought leadership built from consensus paraphrasing
- synthetic first-party studies with weak methodology but strong formatting
- vendor pages dressed up as neutral advice
- coordinated PR and content campaigns designed to create entity certainty
- prompt-targeted FAQ pages that answer exactly what models like to quote
In other words, AI search spam will often resemble good B2B content.
That is what makes it dangerous.
Why This Problem Moves Faster Than Old SEO Did
Three reasons.
Existing operators already know the game
Marketers, agencies, affiliate publishers, PR teams, and SaaS growth teams do not need to learn demand capture from scratch. They only need to adapt old tactics to a new interface.
The incentives are cleaner
If ChatGPT, Gemini, or Perplexity recommends three brands, the winner is visible instantly. The output is simpler than a ranking spread across dozens of positions.
Tooling is arriving earlier
In the SEO era, instrumentation took time to mature. In AI visibility, observability products are showing up almost immediately. That reduces uncertainty and speeds up adversarial behavior.
This is why our earlier piece, SEO dashboards are blind to AI search demand, was only the beginning of the problem. The missing dashboard layer creates operational blindness for honest teams, but it also creates a roadmap for aggressive teams once tooling matures.
What Brands Should Do Instead of Joining the Citation Arms Race
Some manipulation pressure is inevitable. That does not mean the right response is to copy low-trust tactics.
The better play is to build source quality that survives scrutiny.
Publish answer-first, but show your work
Yes, AI engines prefer direct answers. Give them that. But pair direct answers with evidence, explicit methodology, and clear sourcing. Do not stop at the extractable sentence.
Create comparison content with transparent bias
If you are writing a comparison page and you sell one of the products, say that clearly. Hidden commercial framing is the tactic users resent most, and AI search will amplify that resentment once people realize how often recommendation layers depend on vendor-authored content.
Invest in original data, not just formatted opinions
The easiest content to synthesize is generic advice. The hardest content to replace is evidence no one else has. Original research, proprietary benchmarks, customer datasets, and transparent scoring frameworks give models a stronger reason to cite you for substance instead of structure alone.
Strengthen entity clarity across independent domains
You do want repeated brand understanding across the web. But the durable version comes from independent corroboration, not synthetic repetition. Real mentions, real customer narratives, real product descriptions, real use cases.
Monitor how AI describes you
It is not enough to know whether you are mentioned. You need to know whether you are framed accurately, whether competitors are being preferred, and whether weak sources are shaping the narrative around your brand. That is exactly the measurement problem searchless.ai is built to solve.
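The core of that monitoring loop can be sketched in a few lines. Everything below is an illustrative assumption, not a searchless.ai implementation: the word lists are placeholders, and the `answers` dict stands in for answer text you would collect from each engine’s API. A real pipeline would use far richer sentiment and claim checks.

```python
# Minimal sketch of a brand-framing monitor. The word lists and sample
# answers are hypothetical stand-ins; a real pipeline would pull answer
# text from each engine's API and apply much richer framing analysis.

POSITIVE = {"reliable", "leading", "trusted", "accurate"}
NEGATIVE = {"expensive", "limited", "outdated", "buggy"}

def framing_report(brand: str, answers: dict[str, str]) -> dict[str, dict]:
    """For each engine, report whether the brand is mentioned and which
    positive/negative framing words appear in the same answer."""
    report = {}
    for engine, text in answers.items():
        words = {w.strip(".,!?") for w in text.lower().split()}
        mentioned = brand.lower() in words
        report[engine] = {
            "mentioned": mentioned,
            "positive": sorted(words & POSITIVE) if mentioned else [],
            "negative": sorted(words & NEGATIVE) if mentioned else [],
        }
    return report

answers = {
    "engine_a": "Acme is a reliable but expensive option for small teams.",
    "engine_b": "For this use case, most reviewers recommend OtherCo.",
}
print(framing_report("Acme", answers))
```

Even this toy version captures the three questions that matter: are you mentioned, how are you framed, and which engines skip you entirely.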
What AI Engines Themselves Need to Fix
The burden is not only on publishers.
If AI search companies want trust, they need stronger defenses against citation theater.
Here are the minimum fixes that matter:
Weight source independence more heavily
Ten pages controlled by one vendor should not look like ten independent confirmations.
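The “ten pages, one vendor” fix can be pictured as a scoring heuristic: count distinct controlling entities, not raw page counts. The ownership map and the 0.1 repeat discount below are assumptions for illustration; a real system would need actual ownership and affiliation data.

```python
from collections import defaultdict

def independence_weighted_support(citations: list[str],
                                  owner_of: dict[str, str]) -> float:
    """Score support for a claim by distinct owners, with diminishing
    returns for extra pages from the same owner. `owner_of` maps a
    domain to its controlling entity (hypothetical input data)."""
    pages_per_owner = defaultdict(int)
    for domain in citations:
        pages_per_owner[owner_of.get(domain, domain)] += 1
    # First page from an owner counts 1.0; each repeat adds only 0.1.
    return sum(1.0 + 0.1 * (n - 1) for n in pages_per_owner.values())

owner_of = {
    "vendor-blog.example": "VendorCo",
    "vendor-compare.example": "VendorCo",
    "vendor-faq.example": "VendorCo",
    "indie-review.example": "IndieLab",
}

# Three vendor-controlled pages score below two independent sources.
vendor_heavy = independence_weighted_support(
    ["vendor-blog.example", "vendor-compare.example", "vendor-faq.example"],
    owner_of)
diverse = independence_weighted_support(
    ["vendor-blog.example", "indie-review.example"], owner_of)
print(vendor_heavy, diverse)
```

Under this weighting, a coordinated content network gains almost nothing from each additional page it controls, which is exactly the incentive change the fix is meant to create.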
Distinguish expert synthesis from vendor self-description
A company’s own comparison page can be useful, but it should not be treated as editorially equivalent to independent analysis.
Surface uncertainty more honestly
When evidence conflicts or source diversity is weak, the interface should say so. A polished answer with hidden source weakness is worse than a messy SERP with visible disagreement.
Reward methodology, not just formatting
If citation systems overweight clear formatting and underweight methodological rigor, bad actors will optimize the gap immediately.
This is not theoretical. SEO history already ran this experiment for twenty years.
The Contrarian View: GEO Is Real, but Easy Wins Will Rot the Channel
The anti-hype position is not “AI search does not matter.” That would be lazy.
The real contrarian view is harder: AI search matters a lot, but the easier it becomes to manipulate recommendation layers, the more fragile those layers become as sources of trust.
That is the paradox the market is entering.
GEO is a real operating discipline. Brands do need answer-first content, stronger entity clarity, structured data, source visibility, and cross-engine monitoring. Searchless.ai is right to treat AI visibility as a measurable system because visibility inside AI answers now changes real business outcomes.
But the market should stop pretending that better machine readability automatically creates a better information environment. Sometimes it does. Sometimes it just makes manipulation more efficient.
The winners over the next 12 months will not just be the brands that optimize for citation. They will be the brands that can still look credible after the market gets flooded with citation-shaped content.
That is a different standard.
The Practical Standard for 2026
If you are a founder, CMO, or content lead, use this filter before publishing anything meant to influence AI search:
- would this still feel trustworthy if a user clicked the source?
- are the claims supported by evidence, not just phrasing?
- is the commercial bias obvious?
- would an independent reviewer agree with the framing?
- are we helping the model understand reality, or just feeding it a script?
If the answer to that last question is “feeding it a script,” you are not doing GEO. You are doing dressed-up spam.
The market will reward that for a while. Then it will punish it.
That is what old SEO taught us, and AI search is about to relearn it much faster.
Frequently Asked Questions
Is AI search really becoming vulnerable to spam already?
Yes. Recent reporting from The Verge shows brands engineering self-serving comparison pages for AI citation environments, and the fast rise of dedicated AI visibility tooling makes adversarial optimization more likely, not less.
Is GEO just old SEO with a new label?
No, but it inherits many of the same incentives. GEO focuses on mention rate, citation share, entity clarity, and recommendation presence across AI systems. The tactics and metrics are different, but the pressure to manipulate distribution is familiar.
What kind of content is most at risk of becoming AI citation bait?
Comparison pages, best-tool roundups, FAQ hubs, category pages, and synthetic research pieces are the most exposed because they match the structure AI engines like to summarize and reuse.
How should brands compete without becoming spammy?
Publish answer-first content with transparent sourcing, show methodology, create clearly labeled comparisons, invest in original data, and track how AI engines frame your brand across prompts and platforms.
Why does this matter for revenue, not just content quality?
Because AI interfaces compress choice. If a model recommends three brands in a high-intent category and you are missing, you lose qualified demand before the user ever visits a results page.
Free AI Visibility Score in 60 seconds -> audit.searchless.ai