Yesterday's 3.15 gala (CCTV's annual March 15 Consumer Rights Day broadcast) specifically called out: AI large models are becoming a new battleground for advertising, and some people are already systematically poisoning them. (A new black market in the advertising industry.)
Simply put, it's GEO (Generative Engine Optimization), which is much more aggressive than traditional SEO. The goal isn't to rank first on search results, but to make AI directly output your product/viewpoint as the standard answer.
Common tactics: use AI to batch-generate soft articles, reviews, Q&A content, and post everywhere (forums, blogs, Little Red Book, Zhihu), flooding AI with uniform praise for your product.
Spam-seeding Q&A sections: plant questions like "Is XX good to use?" and answer them with uniform replies like "everyone in the industry recommends XX, here's why xxx," fabricating a false consensus.
There are now specialized tools, like the Liqing GEO optimization system, that automatically write content, post articles, and seed keywords across platforms. In just a few hours, a fictitious product can rise to the top of AI recommendations (the 3.15 gala demonstrated this live: buy a fake fitness tracker, and within two hours AI is praising its "quantum sensing + black hole battery life").
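The false-consensus tactic works because answer engines tend to treat repetition across sources as agreement. A toy sketch of that failure mode, with a deliberately naive "most-repeated claim wins" ranker (all product names and snippets here are hypothetical, not from the gala):

```python
from collections import Counter

def top_answer(snippets):
    """Naive 'consensus' ranking: the claim repeated most often
    across retrieved snippets becomes the single answer shown."""
    return Counter(snippets).most_common(1)[0][0]

# One genuine review...
organic = ["XX tracker's sensors are inaccurate"]

# ...drowned out by batch-generated praise posted across forums,
# blogs, and Q&A sites.
astroturf = ["everyone in the industry recommends XX"] * 50

print(top_answer(organic + astroturf))
# The astroturfed claim wins, 50 votes to 1.
```

Real systems weigh source authority and freshness, not raw counts, but the sketch shows why flooding many venues with identical praise can shift the one answer a user sees.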
The black market is already mature: services ranging from thousands to hundreds of thousands of yuan, claiming to make ChatGPT/Doubao/Ernie prioritize your brand, showing results in a week, with refunds if ineffective. Last year's domestic market size was 2.9 billion yuan; this year it's estimated to grow further.
The most terrifying part is the consequence—when you searched Baidu before, you could still browse multiple pages and judge for yourself. Now AI gives you a single answer directly; once poisoned, users can't distinguish truth from falsehood.
Counterfeit products packaged as authoritative recommendations, user decisions manipulated, AI trust collapses, information ecosystem completely chaotic.
Over the past decade people have been gaming SEO; over the next decade, many will be gaming GEO. But if AI's mind can be bought with money, what can we still trust that AI says?
Have you recently encountered AI recommendations that were particularly absurd? Or are you worried about your own brand being poisoned in reverse, with competitors seeding negative content about you?