What signals were released? Seven departments jointly issue the first generative AI regulatory document
The booming generative AI industry has officially welcomed its first regulatory document.
Following the public solicitation of comments on the draft Measures for the Management of Generative Artificial Intelligence Services by the Cyberspace Administration in April this year, on July 13 seven departments, including the Cyberspace Administration, jointly issued the Interim Measures for the Management of Generative Artificial Intelligence Services, which will take effect on August 15, 2023.
A relevant official of the Cyberspace Administration said the Measures were issued to promote the healthy development and standardized application of generative AI, safeguard national security and the public interest, and protect the legitimate rights and interests of citizens, legal persons, and other organizations.
Generative artificial intelligence refers to technology that generates text, images, audio, video, code, and other content based on algorithms, models, and rules. Generative AI, exemplified by OpenAI's ChatGPT, is sparking a new round of AI arms race among technology giants and entrepreneurs at home and abroad, including Microsoft, Google, Meta, Baidu, and Alibaba.
The Measures comprise 24 articles, setting out requirements for generative AI service providers on algorithm design and filing, training data and models, protection of user privacy and trade secrets, supervision and inspection, and legal liability. At the same time, the Measures make clear a posture of support and encouragement toward the generative AI industry.
Wu Shenkuo, a doctoral supervisor at the School of Law of Beijing Normal University and deputy director of the Research Center of the China Internet Association, told a First Financial reporter that the regulatory document on generative AI was released quickly, reflecting regulation advancing in step with the technology's development and application, and showing the increasingly mature and agile evolution of China's internet and digital governance.
Several practitioners told First Financial that the Measures emphasize practical implementability and embody a basic approach of risk prevention, risk response, and risk management. Their implementation is significant for promoting industrial development and creating a sound innovation ecosystem for generative AI.
What regulatory signals are being released?
Wu Shenkuo told First Financial reporters that the newly issued Measures stand out in three respects. First, they emphasize a classified and tiered management mechanism for AIGC, signaling that regulatory mechanisms tailored to different risk types may be introduced in the future. Second, they focus on cultivating the AIGC industry ecosystem, particularly the sharing of computing resources and the construction of public pre-training data platforms. Third, they emphasize communication and cooperation at home and abroad; on applicability, they distinguish services provided to the domestic public from those provided overseas or not offered domestically, clarifying the Measures' scope of application.
A comparison by First Financial shows that, relative to the earlier draft for public comment, the Measures released today add a number of new incentives for generative AI services.
For example, Articles 5 and 6 in Chapter 2, "Technology Development and Governance," encourage the innovative application of generative AI across industries and fields, the generation of positive, healthy, high-quality content, the exploration and optimization of application scenarios, and the building of an application ecosystem. They also encourage independent innovation in foundational technologies such as generative AI algorithms, frameworks, chips, and supporting software platforms, international exchange and cooperation on an equal and mutually beneficial basis, and participation in the formulation of international rules for generative AI.
The Measures also state that effective measures should be taken to encourage the innovative development of generative AI, with inclusive, prudent, classified, and tiered supervision of generative AI services.
On computing power, a particular industry concern, the Measures call for promoting the construction of generative AI infrastructure and public training data resource platforms; advancing collaborative sharing of computing resources to improve utilization efficiency; promoting the orderly, classified, and tiered opening of public data to expand high-quality public training data resources; and encouraging the use of secure and trustworthy chips, software, tools, computing power, and data resources.
On market access for service providers, the Measures state that foreign investment in generative AI services must comply with relevant laws and administrative regulations on foreign investment.
Compared with the draft for public comment, the Measures adjust the wording on the authenticity and reliability of generated content.
"Talking nonsense with a straight face," i.e., hallucination, is a widely criticized problem with generative AI. The draft for public comment required that content generated with generative AI be truthful and accurate, and that measures be taken to prevent the generation of false information. The Measures instead require providers to take effective steps, based on the characteristics of the service type, to enhance the transparency of generative AI services and improve the accuracy and reliability of generated content.
![Seven departments jointly issue the first generative AI regulatory document](https://a5qu.com/upload/images/23b64890a3004dc7a9018da34eead298.jpg)
In addition, the Measures provide that departments including cyberspace administration, development and reform, education, science and technology, industry and information technology, public security, radio and television, and press and publication shall strengthen management of generative AI services in accordance with the law and their respective responsibilities.
Industry insiders told reporters that the Measures embody a degree of fault tolerance, which better fits reality and makes implementation more feasible.
"Many application fields can tolerate an imperfect large model; if a game character's beard is drawn a bit longer or shorter, or it says a wrong sentence, the occasional mistake may do no harm. But some fields are critical and cannot tolerate errors, such as news search, government websites, or medicine and education. In those fields, the problem of large-model errors will need to be solved," said one large-model practitioner.
The content of the Measures also emphasizes the protection of minors.
The earlier draft called for appropriate measures to prevent users from over-relying on or becoming addicted to generated content; the final Measures require effective measures to prevent minors from over-relying on or becoming addicted to generative AI services.
On supervision, the Measures provide that the relevant regulators shall supervise and inspect generative AI services within their respective responsibilities. Providers shall cooperate in accordance with the law, explaining the source, scale, type, annotation rules, and algorithmic mechanisms of training data as required, and providing necessary technical and data support and assistance.
Multiple provisions of the Measures address the protection of personal privacy and trade secrets. For example, institutions and personnel involved in the security assessment and supervisory inspection of generative AI services must, in accordance with the law, keep confidential any state secrets, trade secrets, personal privacy, and personal information learned in the course of their duties, and must not leak them or unlawfully provide them to others.
Image generated by AI using the keywords "generative artificial intelligence" and "regulation"
What impact will it have on the industry?
Li Shuchong, vice president of China Electronics Cloud, said his first reaction on seeing the Measures was: "Just in time!" He especially appreciates the principle in Article 6 encouraging independent innovation in foundational technologies such as generative AI algorithms, frameworks, chips, and supporting software platforms. "Nvidia cards are now hard to come by; we can no longer let large models be held up by a computing system that is being reshaped," he told First Financial.
He added that the Measures will standardize the application and scenario deployment of the large-model industry, letting AI technology better serve the economy and high-quality industrial development.
Tian Feng, president of SenseTime's Intelligent Industry Research Institute, commented to First Financial reporters that the Measures show global leadership in AI 2.0 governance, are feasible to implement, and are significant for promoting industrial development.
Chen Yunwen, CEO of Daguan Data, told a First Financial reporter: "The industry's development will gradually be standardized. The introduction of the Measures provides guidance for generative large-model services. Generative AI technology is very new and very hot, and its coming deployment and development will rely on this system for guidance."
Another practitioner said that while the Measures emphasize risk prevention, they also reflect a degree of fault tolerance and error correction, striving for a dynamic balance between regulation and development.
The topics of safety, trustworthiness, and regulation addressed in the Measures had already drawn attention and discussion from many practitioners in the large-model field.
Robin Li, chairman of Baidu, said recently that only by establishing sound laws, regulations, institutions, and ethics to ensure the healthy development of AI can a good innovation ecosystem be created.
![Seven departments jointly issue the first generative AI regulatory document](https://a5qu.com/upload/images/b76b4eb638a5ae37c88a4daca44fc7f9.jpg)
Zhou Hongyi, founder of 360, spoke of the need to build proprietary large models that are "safe, trustworthy, controllable, and easy to use." He argued that the key to making models "safe and controllable" lies in sticking to an "assistant mode": positioning the large model as an assistant to the enterprise and its employees, a "copilot" that provides help while keeping human judgment decisive throughout the decision loop.
Zhang Yong, chairman and CEO of Alibaba Cloud Intelligence Group, likewise said that "building safe and trustworthy artificial intelligence" has gradually become an industry consensus, and that relevant laws and regulations are being improved, cultivating good soil for the sustainable development of the technology and the industry. "Innovation carries much uncertainty. Some risks can be foreseen and prevented in advance; other problems arise in the course of development and must be solved through development."
Regulatory measures are brewing worldwide
Beyond China, ChatGPT-style generative AI large models have set off a race for capital, and countries' growing attention to AIGC compliance is driving the introduction of corresponding regulatory measures.
Europe has long been at the forefront of AI regulation. In May this year, the European Parliament approved the first comprehensive artificial intelligence bill. "We want AI systems to be accurate, reliable, safe, and non-discriminatory, regardless of their origin," European Commission President Ursula von der Leyen said on May 19.
At the G7 Leaders' Summit held in Japan in May, leaders acknowledged the need to govern artificial intelligence and immersive technologies and proposed creating a ministerial-level forum by the end of the year to discuss issues around generative AI, such as copyright and countering disinformation.
The UK's competition regulator likewise announced in May that it would begin reviewing AI's impact on consumers, businesses, and the economy, and whether new regulatory measures are needed.
Ireland's data protection authority said in April that generative AI needs to be regulated, but that regulators must figure out how to do so properly before rushing into bans that "really aren't going to stand up."
The National Institute of Standards and Technology, an agency of the US Department of Commerce, announced in June that it would set up a public working group of volunteer experts on generative AI to help seize the opportunities the technology brings and to develop guidance for addressing its risks. The Federal Trade Commission said in May that it is committed to using existing laws to rein in the risks of AI.
Japan's privacy regulator announced in June that it had warned OpenAI not to collect sensitive data without people's permission. Japan is expected to introduce regulatory measures by the end of 2023 that may hew closer to the US stance than to the EU's planned strict regime, as Japan hopes the technology can spur economic growth and make it a leader in advanced chips.
On June 12, UN Secretary-General Antonio Guterres backed a proposal from some AI executives to establish an international AI watchdog modeled on the International Atomic Energy Agency. He also announced plans to launch a high-level AI advisory body by the end of the year to regularly review AI governance arrangements and make recommendations.
Facing the global trend toward regulating generative AI, Wu Shenkuo told First Financial reporters that new technologies and applications require continual exploration of agile, efficient regulatory mechanisms and methods, so that associated risks of all kinds can be addressed and handled as promptly as possible. In addition, a set of economical, convenient, and workable compliance guidelines must be built and refined so that all parties have clear compliance standards and direction.
"Good ecological governance also requires all parties to build consensus as broadly as possible, forming shared values and recognized codes of conduct for the governance of new technologies and applications on a wider scale," he said.
Every technological revolution brings enormous opportunities and risks. For generative AI, only a genuine flywheel between user usage and model iteration will make models ever smarter; finding the balance between regulation and technological development will likewise test regulators worldwide.