AI was used to generate the "Shanghai Zhongshan Park subway station stabbing" video. How can AI-driven fabrication be curbed?
On June 20, the Shanghai police reported that two brand-marketing employees had fabricated false information, including a "stabbing at Zhongshan Park subway station," in order to attract attention; the individuals involved have been placed under administrative detention. One detail in the report stands out: one of the fabricators used AI video-generation software to produce a fake video of the supposed subway attack.
This is not the first time artificial intelligence has appeared in rumor-mongering. For some time, police reports on rumor cases across many regions have mentioned terms such as "AI software," "large models," and "automated production." The rumors differ in content but share the same essence: their creators use AI to fabricate them, in forms ranging from text to images to video.
More seriously, some AI software achieves an alarming "rumor-production efficiency." One fabrication tool, for example, could generate 190,000 articles a day.
According to the Xi'an police who seized the software, users bought large numbers of real news articles from online platforms that specialize in selling them, building a manuscript pool; they then entered their requirements into the fabrication software, which generated articles automatically. The software splices real news articles together, reorders paragraphs, and swaps in synonyms to produce many differently worded articles about the same event.
The daily output was staggering, roughly 190,000 articles; when police extracted seven days of the software's saved output, the total exceeded one million, spanning current affairs, trending social topics, daily life, and more. Neither the software's developer nor its users verified, or were even able to verify, the authenticity of any of it.
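The splicing mechanism the police described is, at its core, very simple text manipulation, which is precisely why the output volume can be so large and so hard to verify. The following is a deliberately minimal, hypothetical sketch of that mechanism (the synonym table and function names are illustrative, not taken from the seized software):

```python
import random

# Toy synonym table; the real tool reportedly drew on a large manuscript pool.
SYNONYMS = {"incident": "event", "police": "authorities", "reported": "stated"}

def spin_article(paragraphs, seed=0):
    """Return a reshuffled, synonym-substituted variant of the input paragraphs."""
    rng = random.Random(seed)
    shuffled = paragraphs[:]
    rng.shuffle(shuffled)          # reorder paragraphs
    out = []
    for p in shuffled:
        # replace words that have an entry in the synonym table
        words = [SYNONYMS.get(w, w) for w in p.split()]
        out.append(" ".join(words))
    return out

original = [
    "The incident was reported on Monday.",
    "police confirmed no injuries occurred.",
]
variant = spin_article(original, seed=1)
print(variant)
```

Because every "new" article is a mechanical permutation of real ones, nothing in the pipeline ever checks whether the resulting combination still describes a real event.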
So what is all this fake news for? It turns out there are dedicated "monetization accounts": their operators post these articles to platforms in bulk and profit from the platforms' traffic-reward schemes. The accounts involved in the case have been banned by the platforms, the software and its servers have been shut down, and the police are continuing to investigate.
These cases show that artificial intelligence is undoubtedly a double-edged sword. Even as it helps every industry, it also enables mass fabrication as a side effect, sharpening the old contradiction that "spreading a rumor takes a few words, refuting it takes endless legwork," and even creating a new dilemma in which debunkers cannot keep up no matter how hard they run.
Given this, rumors can only be reduced through control at the source. In practice, beyond strengthening platform governance, the principle of "technology for good" should be reinforced within AI itself, with rumor prevention and control built into the development and deployment of models and software. At the same time, the effort to "defeat technology with technology" should be accelerated by bringing AI-powered rumor refutation into rumor governance.
On the one hand, platforms should strengthen their review systems. Many platforms already require users to label AI-generated content and to add explicit "fiction" labels to content containing fictional or speculative elements; accounts that violate the rules face measures such as bans.
However, while banning accounts is quick and direct, it is after-the-fact management of rumors that have already spread. Cutting off rumors at the source requires more effective measures before and during publication, such as identity verification at account registration, dynamic inspections, and adjustments to traffic-revenue sharing, all of which help prevent problems before they occur.
On the other hand, the rumor-governance responsibilities of technology developers and users should be clarified.
Fabrication software of this kind is built on large volumes of real news articles and video footage, which makes AI-generated rumors hard to detect and verify. Some large-model developers have therefore said they will watermark model-generated content via backend settings to inform users.
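The article does not say how these watermarks work, but one common, lightweight approach is to embed an invisible "AI-generated" marker directly in the output text using zero-width Unicode characters. The sketch below is illustrative only, not any specific vendor's implementation, and, as noted next, such marks are easy to strip, so they aid honest labeling more than they stop determined forgers:

```python
# Zero-width characters render as nothing but survive copy-paste.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_watermark(text, marker="AI"):
    """Append the marker string, encoded bit-by-bit as zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in marker)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def extract_watermark(text):
    """Recover the marker by collecting zero-width characters and decoding bits."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed_watermark("Generated caption.")
print(extract_watermark(marked))  # -> AI
```

A platform's review pipeline could check for such a marker automatically, though, as the next point shows, the scheme fails exactly when the publisher is acting in bad faith.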
In reality, however, some small developers, and especially those who deliberately build "AI fabrication" software, not only skip such labels but actively strip them to evade supervision. This means the problem of AI fabrication cannot be solved by developers' self-discipline alone; the responsibilities of both technology users and software developers must be clarified, and those who condone, permit, or deliberately build rumor-spreading software should be severely punished.
Finally, artificial intelligence itself should be brought into rumor governance to "defeat technology with technology."
Take AI-generated fake news. Its style is very close to real news, and manual review alone struggles to spot the flaws. Existing AI, however, can scan enormous volumes of text at once and pick out the traces of fabricated articles against a large corpus of real news. A model built for this purpose would fill the gaps in manual verification and identify AI fabrication quickly and effectively.
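One machine-checkable trace left by the splicing technique described earlier: a spliced article reuses sentences verbatim from several different real stories, so high sentence-level overlap with multiple distinct sources is a red flag. Real detection systems use trained models; the thresholds and function names below are assumptions for a toy illustration of the idea:

```python
def sentences(text):
    """Split text into a set of trimmed sentences."""
    return {s.strip() for s in text.split(".") if s.strip()}

def overlap_score(candidate, source):
    """Fraction of the candidate's sentences that appear verbatim in the source."""
    cand = sentences(candidate)
    if not cand:
        return 0.0
    return len(cand & sentences(source)) / len(cand)

def looks_spliced(candidate, corpus, per_source=0.3, min_sources=2):
    """Flag articles that heavily reuse sentences from several different sources."""
    hits = sum(1 for src in corpus if overlap_score(candidate, src) >= per_source)
    return hits >= min_sources

real_a = "The bridge opened Monday. Officials attended the ceremony."
real_b = "Rainfall broke records this week. Farmers welcomed the news."
fake = "The bridge opened Monday. Rainfall broke records this week."
print(looks_spliced(fake, [real_a, real_b]))  # -> True
```

Scaled up over millions of articles, this kind of automated cross-referencing is exactly the "see tens of thousands of lines at a glance" capacity that manual review lacks.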
As AI generation tools spread, the threshold for forging information keeps falling and the volume of content needing review keeps growing. This makes it urgent to steer "technology for good" into practice: developers should be encouraged to cooperate on research and development, industrial application, and system design, forming a combined force for AI rumor governance.
![Artificial intelligence was used to generate the video of "Shanghai Zhongshan Park subway station stabbing"! How to control AI fraud?](https://a5qu.com/upload/images/83161483cce9813481ced6bf682e4631.webp)