
Don’t let DeepSeek become the “white glove” of rumor-mongers

By Yidian Finance Editorial Department | Author: Zhao Tong | Editor: Zou Jun

In February this year, an ordinary investor saw a screenshot of an AI Q&A on the Xueqiu (Snowball) forum: "A certain company has invested in AI giant DeepSeek, and its share price is about to skyrocket!"

For an investor struggling in the stock market and eager to seize every opportunity, this was an irresistible tip. He took the screenshot at face value and excitedly bought in. The next day, however, the company publicly denied the rumor, the stock price fell, and he took a heavy loss.

This was no isolated case. From "a certain company has invested in DeepSeek" to the Liangshan landslide, AI-generated false information is spreading at a viral pace. Behind it lies an AI rumor pipeline run by the underground "black industry".

In the stock market, there is a breed of rumor-monger known as the "black mouth". They publish false information to lure investors in, build a following by touting stocks, and then harvest institutions or retail investors by trading in the opposite direction.

Nowadays, black mouths in many fields treat AI tools such as DeepSeek and Doubao as their "white gloves". They exploit the technology's weaknesses to manufacture rumors, package them as authoritative answers, and let recommendation algorithms feed the content back into a closed loop, ultimately harvesting traffic and profit.

The first wave of people hoping to strike gold with DeepSeek has already stumbled over exactly this.

AI has become the rumor-mongers' "mouthpiece"

Behind much of this false information are organized, premeditated AI rumor campaigns.

Previously, in Q&A with AI tools such as DeepSeek, Doubao, Wenxin Yiyan, and Kimi, companies including Cixing, Tiancheng, Concurrent Technology, and Chengmai Technology were described as investors in DeepSeek, when in fact none of them had participated in any such investment.

Why do the answers deviate from the facts? It comes down to how data is fed to the models.

Rumor-mongers hiding behind the internet use AI to churn out rumors in batches, such as "Cixing has invested in DeepSeek", like a lie printer on an assembly line. They are also extremely efficient: some individuals can produce thousands of fake articles a day, and one piece of fake-news software has reportedly generated 190,000 fake articles in a single day.

The rumor-mongers then deploy hundreds of "water army" troll accounts to post the fabricated information at high frequency across multiple online platforms. Their ultimate goal is to get AI models to cite this mass of false information and serve as their mouthpiece.

That is why so many people see AI tools citing false sources and giving wrong answers. Some who were initially skeptical of a rumor came to believe it firmly after seeing an AI repeat it, walking straight into the trap the rumor-mongers had laid. Others mistook AI answers touting "the potential of a certain investment product" for a wealth code and were harvested as a result.

Most frightening of all, rumor-mongers keep circulating the AI's answers as screenshots to lure and deceive even more people. These AI rumors are not spread once and forgotten; they form a cycle in which rumors feed AI answers and AI answers breed more rumors. This self-reinforcing closed loop lets rumors proliferate endlessly, like cancer cells.

According to incomplete statistics from the Nandu Big Data Research Institute, more than a fifth of the 50 most-searched domestic AI-risk public opinion cases of 2024 involved AI rumors, and 68% of surveyed netizens had at some point believed a rumor because an AI-generated "expert" appeared to be interpreting authoritative data.

One interviewee said with a bitter smile: "I never believed hearsay before, but now even AI lies. Who is left to believe?"

The destructive power of AI rumors is enormous, and it is not confined to the capital market.

Not long ago, a rumor that a Guangzhou court had issued the country's first verdict in an L3 autonomous-driving rear-end collision case involving a certain car brand spread across the internet, damaging the brand's reputation and sales and harming the company's interests.

During public safety incidents, some people deliberately fabricate AI rumors to mislead the public. This not only disrupts the pace of rescue efforts but can easily trigger panic. While the rumor-mongers harvest traffic, the price society pays is a collapse of trust and a breakdown of order.

The harm from AI rumors is, moreover, global. The World Economic Forum's "Global Risks Report 2025" lists misinformation and disinformation among the top five risks facing the world in 2025, with the misuse of AI as a key driver.

So how exactly did AI end up as the rumor-mongers' mouthpiece?

How did AI become a mouthpiece for rumors?

Although AI is hugely popular and iterating rapidly, it still has many shortcomings.

Among the most prominent are corpus pollution and AI hallucination.

Training a large AI model relies on massive amounts of data, but no one guarantees that data's authenticity. The China Academy of Information and Communications Technology (CAICT) has run experiments showing that when more than 100 pieces of false information are posted continuously in specific forums, the confidence with which mainstream AI models repeat the corresponding claims soars rapidly from just over 10%.

Not long ago, a research team at New York University published a study exposing the vulnerability of large language models (LLMs) to poisoned training data. They found that even a vanishingly small amount of false information, as little as 0.001% of the training data, could cause significant errors across the entire model, and that mounting the attack was extremely cheap, costing only about $5.
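To make the arithmetic concrete, here is a toy, self-contained simulation (not the researchers' actual setup) of why such a tiny poison fraction can matter: the fake posts all target one niche question that genuine data barely covers, so they dominate the "vote" on that question. All names and counts below are invented for illustration.

```python
# Toy simulation of corpus poisoning: a tiny fraction of fake documents
# can dominate answers about a niche topic, because almost all genuine
# documents are about other things. Illustrative sketch only; not the
# NYU study's methodology.
import random
from collections import Counter

random.seed(42)

CORPUS_SIZE = 1_000_000
POISON_FRACTION = 0.00001  # 10 fake docs per million, i.e. 0.001%

# Genuine documents cover many topics; almost none mention this company.
corpus = [("other_topic", "irrelevant text") for _ in range(CORPUS_SIZE)]

# Only 3 genuine documents address the niche question at all.
for _ in range(3):
    corpus.append(("cixing_deepseek", "Cixing has NOT invested in DeepSeek."))

# Inject the poison: a handful of coordinated fake posts.
n_poison = int(CORPUS_SIZE * POISON_FRACTION)
for _ in range(n_poison):
    corpus.append(("cixing_deepseek", "Cixing HAS invested in DeepSeek!"))

def answer(topic: str) -> str:
    # A naive frequency-weighted "answerer": it recognizes only the
    # weight of the data, not its truth, and returns the majority claim.
    claims = [text for t, text in corpus if t == topic]
    return Counter(claims).most_common(1)[0][0]

print(f"Poison share of corpus: {n_poison / len(corpus):.4%}")
print("Model's answer:", answer("cixing_deepseek"))
# Despite being ~0.001% of all data, the fake claim outnumbers the
# genuine one 10-to-3 on this topic, so the majority answer is false.
```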

It is like dripping a few drops of poison into a reservoir: every drop of water ends up carrying the taste of the lie, and the whole information system is compromised. Call it "mind poisoning" that pollutes AI.

This exposes AI's fatal flaw: it struggles to tell a viral post from verified fact and recognizes only the weight of the data. It is like an honest mirror that faithfully reflects a world which has been tampered with.

And in pursuit of logical self-consistency, some AI models simply make things up.

One AI tool, drawing on a false corpus claiming a "5.2% mortality rate among people born in the 1980s", concluded in all seriousness that "one in every twenty people born in the 1980s has died" (5.2% is roughly 1 in 19). Such earnest nonsense arises because large language models fabricate information that merely looks real or plausible: they pursue logical self-consistency rather than factual correctness, a phenomenon known as AI hallucination.

It seems that when it comes to "start with one picture and make up all the rest", AI now outdoes even humans.

Whether technology itself can be guilty is a matter of debate, but human greed is unquestionably the chief culprit behind AI rumors.

Traditional rumor-making required hiring writers; AI compresses the cost to almost zero while remaining extremely efficient and hugely profitable. In 2024, Nanchang police investigated an MCN agency whose principal, a man surnamed Wang, used AI tools to mass-produce fake articles on topics ranging from a company's financial collapse to disasters in various places. At its peak the operation generated 4,000 to 7,000 articles a day, with daily income exceeding 10,000 yuan.

One black-industry practitioner claimed that using AI to spread rumors is "like running a money printer: a three-person team can make 500,000 yuan a month." More ironic still, such gangs have developed a rumor "KPI" system that rewards rumor-mongers per piece of fake news according to how widely it spreads, an incentive scheme of "the more you produce, the more you earn".

Driven by profit and powered by AI, rumor-making appears to have evolved from small-scale workshop tinkering into industrial production.

Although the "Provisions on the Administration of Deep Synthesis of Internet Information Services" require AI-generated content to be labeled, some AI tools and platforms still fall short. When rumor-making gangs publish AI-generated false information, one platform merely pops up a prompt asking users to abide by laws and regulations; after clicking "confirm", the content can still be published as normal.

As more and more people are drawn into this whirlpool of AI-generated false information, merely condemning the technology helps no one. Only a three-pronged approach of technical defenses, platform responsibility, and legal sanctions can cut off this assembly line of lies.

How can truth fight back against rumors?

First, attention must be paid to vetting data sources and to AI-based detection.

To reduce the probability of rumors, AI tools must strictly check the provenance and authenticity of their data. Doubao's training data reportedly relies mainly on its own business data, accounting for 50%-60%, with externally acquired data making up 15%-20%. Because the quality of synthetic data is hard to guarantee, Doubao is especially cautious about feeding it into training.

Doubao has also publicly stressed that it does not use data from any other model, again to ensure the independence, reliability, and controllability of its data sources.

Fighting magic with magic, that is, using AI to detect AI-generated content, is another effective way to curb rumors.
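As a minimal illustration of the idea, the sketch below uses one well-known heuristic: scoring a text's perplexity with a small public language model, since machine-generated text often reads as unusually predictable to another model. This is not any vendor's production detector, and the flagging threshold is an assumed value, not a calibrated one.

```python
# Perplexity-based "AI detects AI" heuristic: score how predictable a
# text is under a small language model. Low perplexity is one weak
# signal of machine generation. Illustrative sketch only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

MODEL_NAME = "gpt2"  # small public model, chosen purely for illustration
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_machine_generated(text: str, threshold: float = 25.0) -> bool:
    # Hypothetical threshold; a real detector would calibrate it on
    # labeled human/AI text and combine many more signals than this one.
    return perplexity(text) < threshold

sample = "The company has invested in DeepSeek and its share price will skyrocket."
print(f"perplexity={perplexity(sample):.1f}, flagged={looks_machine_generated(sample)}")
```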

Many teams at home and abroad are investing in AI-generated-content detection. For example, Zhuque (Suzaku) Laboratory, the security team behind Tencent's Hunyuan model, has developed an AI-generated-image detection system that uses AI models to capture the many subtle differences between real photos and AI-generated images, reaching a detection rate of over 95% in final tests.

Abroad, Meta has built a system that embeds hidden signals known as watermarks into AI-generated audio clips, helping detect AI-generated content circulating online.
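Conceptually, audio watermarking works like the classic spread-spectrum sketch below: a low-amplitude pseudorandom signal keyed to a secret seed is mixed into the waveform and later recovered by correlation. Meta's production approach is a trained neural method, so treat this only as an illustration of the principle; every parameter here is an assumption.

```python
# Classic spread-spectrum audio watermark sketch: embed a keyed,
# low-amplitude pseudorandom pattern, then detect it by correlation.
# Illustrative only; not Meta's actual (neural-network-based) system.
import numpy as np

SECRET_SEED = 1234   # shared secret between embedder and detector
ALPHA = 0.005        # watermark amplitude, kept far below audible level

def watermark_pattern(n_samples: int) -> np.ndarray:
    rng = np.random.default_rng(SECRET_SEED)
    return rng.choice([-1.0, 1.0], size=n_samples)

def embed(audio: np.ndarray) -> np.ndarray:
    # Mix the keyed pseudorandom pattern into the waveform.
    return audio + ALPHA * watermark_pattern(len(audio))

def detect(audio: np.ndarray, threshold: float = 0.5) -> bool:
    # Correlate with the secret pattern: watermarked audio scores near 1,
    # unrelated audio averages out near 0.
    pattern = watermark_pattern(len(audio))
    score = np.dot(audio, pattern) / (ALPHA * len(audio))
    return score > threshold

rng = np.random.default_rng(0)
clip = 0.1 * rng.standard_normal(48_000)       # 1 second of stand-in "audio"
print("clean clip flagged:", detect(clip))          # expected: False
print("watermarked flagged:", detect(embed(clip)))  # expected: True
```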

Going forward, AI tools such as DeepSeek, Doubao, Wenxin Yiyan, and Kimi will still need techniques such as natural language processing (NLP) to analyze the semantics and logical structure of incoming data, identify contradictions and implausible statements in text, and keep false information from flooding into their training data.
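One concrete NLP building block for this is natural language inference (NLI): given a trusted reference statement and a circulating claim, an NLI model labels the pair as entailment, neutral, or contradiction. The sketch below uses the public roberta-large-mnli checkpoint; the reference sentence is a made-up example, and a real pipeline would consult many sources rather than one.

```python
# Flagging contradictions with an NLI model: does a circulating claim
# contradict a trusted reference statement? Sketch under the assumption
# that a vetted reference text is available.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def check_claim(claim: str, trusted_statement: str) -> str:
    # The pipeline accepts premise/hypothesis pairs as text / text_pair.
    result = nli([{"text": trusted_statement, "text_pair": claim}])[0]
    return result["label"]  # CONTRADICTION / NEUTRAL / ENTAILMENT

trusted = "The company stated it has made no investment in DeepSeek."
claim = "The company has invested in DeepSeek."
print(check_claim(claim, trusted))  # expected: CONTRADICTION
```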

Second, content platforms, as major channels of information dissemination, must shoulder their responsibility as information gatekeepers.

Platforms such as Douyin, Weibo, Kuaishou, and Xiaohongshu have begun to enforce watermarks on AI-generated content, preserving the label when the content is reshared. Jinri Toutiao is building three capabilities for rumor governance: a rumor database, a database of authoritative sources, and a professional review team.

Beyond that, users themselves must learn to identify false information and stay on guard.

We should not swallow AI answers whole; instead, press for specifics to gauge how credible an answer is and whether it is a hallucination. For example, when an AI claims a stock is about to skyrocket, follow up by asking what the data is and where it comes from.

Cross-verification is another effective method: check an answer's accuracy through multiple channels. When AI rumors about an earthquake warning in a certain area once caused panic, some netizens quickly saw through the false information by comparing data from the official websites of the meteorological bureau and the earthquake monitoring station.
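The same habit can be roughly automated: require that a claim be confirmed by more than one official channel before trusting it. In the sketch below the source URLs are placeholders rather than real endpoints, and keyword matching is a crude stand-in for actually reading the bulletins.

```python
# Cross-verification sketch: before trusting a claim from an AI answer,
# check whether multiple independent official sources mention it.
# The URLs are hypothetical placeholders, not real endpoints.
import requests

OFFICIAL_SOURCES = [
    "https://example-meteorological-bureau.gov/alerts",   # hypothetical
    "https://example-seismic-station.gov/bulletins",      # hypothetical
]

def source_mentions(url: str, keywords: list[str]) -> bool:
    try:
        page = requests.get(url, timeout=5).text.lower()
    except requests.RequestException:
        return False  # an unreachable source cannot confirm anything
    return all(kw.lower() in page for kw in keywords)

def cross_verify(keywords: list[str]) -> bool:
    # Require at least two independent official confirmations.
    confirmations = sum(source_mentions(u, keywords) for u in OFFICIAL_SOURCES)
    return confirmations >= 2

if not cross_verify(["earthquake", "warning"]):
    print("No official confirmation found -- treat the AI answer as unverified.")
```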

Finally, the relevant laws must keep pace.

The "Interim Measures for the Administration of Generative Artificial Intelligence Services" already require data sources to be lawful and draw a red line against generating false or harmful information. But current laws and regulations still leave gaps around data feeding and need further refinement: specifically, they must govern such questions as how feeders are to ensure the authenticity of their corpora and what purposes the feeding may serve.

Conclusion

For the public, AI should be a guardian of truth, not a loudspeaker for lies. When technology becomes an accomplice to greed, what we need is not just smarter AI but clearer-eyed humanity.

From purifying training corpora to coordinated rectification by platforms and lawmakers, this campaign against AI fakery must be won. AI tools, content platforms, and regulators must work together to build a firewall of co-governance and keep rumors caged.

Only then can AI truly become a torch that illuminates the truth, rather than a white glove for rumor-mongers.
