Can we make good use of the double-edged sword that is ChatGPT? Musk and a thousand others have jointly called for a pause on AI research
Last November, ChatGPT was launched and stunned the world. In the eyes of some, the generative artificial intelligence it represents is a great technological revolution, perhaps no less significant than the advent of the personal computer. In the eyes of others, ChatGPT is simply too dangerous given the current state of human understanding.
The regulation of generative artificial intelligence has quickly drawn the attention of legal professionals. Yesterday, the 2023 World Artificial Intelligence Conference Rule of Law Forum was held under the theme "Digital Governance: Legal Responses to Generative Artificial Intelligence". Experts and scholars from academia, industry, and legal practice, both at home and abroad, explored how generative artificial intelligence should be regulated.
The impact on social ethics and order cannot be underestimated
In the view of the experts attending the forum, generative AI differs markedly from earlier forms of Internet-related infringement. It generally does not cause direct harm to personal safety or property, but under the existing model it will have an enormous impact on intellectual property protection and information screening.
"Can the literary, artistic, and other works generated by ChatGPT be recognized as works under copyright law? If so, should the copyright belong to the technology developer, the user, or no one? There is no consensus," said Professor Wang Liming of Renmin University of China. A more serious hidden danger, he added, may come from the large volume of false information that generative artificial intelligence produces. Some of it stems from flaws in the programs, some from malicious users, and once it spreads, it is highly likely to cause social unrest.
Li Chen, Vice President and Chief Compliance Officer of Ant Group, holds a similar view. As he sees it, intellectual property protection and personal information security have been long-standing problems throughout the development of the Internet industry, not problems that emerged only with generative AI. What is new is that false information that is hard to distinguish from the truth, or content laced with discrimination, prejudice, and error, can now be generated and disseminated at scale through tools like ChatGPT, and its impact on social ethics and order cannot be underestimated.
While acknowledging that generative artificial intelligence will bring convenience to people's lives, Yong Wenyuan, Vice President of iFlytek and General Manager of its Judicial Business Department, pointed out that information whose truth is hard to verify can mislead the public. He cited artificial intelligence services for judicial purposes as an example. AI can now help ordinary people draft legal documents and offer professional advice on laws and regulations. Yet in one lawsuit in the United States, a lawyer cited six precedents that were ultimately shown to have been fabricated outright by ChatGPT. Even industry insiders found it hard to spot the fabrication, let alone ordinary people.
Experts at the forum also pointed out that generative artificial intelligence is built on massive amounts of data, and cross-border data flows are inevitable in the process, making national information security a further challenge. In addition, a growing number of financial, legal, and government services are using generative artificial intelligence to replace traditional human responses. Most of these industries, however, have high entry barriers and require practitioners to hold relevant qualifications. Does ChatGPT's provision of such services amount to practicing without a license?
Technological backwardness is the biggest risk
In fact, ever since ChatGPT's debut there have been constant calls for its legal and ethical regulation. In March of this year, thousands of entrepreneurs and scientists, including Tesla founder Elon Musk, "Sapiens: A Brief History of Humankind" author Yuval Noah Harari, and Turing Award winner Yoshua Bengio, jointly called for a pause of at least six months on related research so that regulation could catch up. On this point, the attending experts generally agreed that strengthening regulation is necessary, but that the right balance must be struck; in particular, regulation must not unduly hinder the technology's development.
"We need to form a value orientation that confronts the risks of generative artificial intelligence while recognizing that falling behind technologically is the biggest risk of all." On specific regulatory measures, Wang Liming suggested that, except in special circumstances such as a developer intentionally infringing on citizens' personal information, a fault liability system should apply to the legal risks posed by generative artificial intelligence in the vast majority of cases, together with the existing notice-and-takedown principle.
Under the fault liability system, if a developer has exercised the utmost duty of care in reviewing generated content, yet damage could not be entirely avoided given current technical conditions, it is not appropriate to deem the developer at fault. The notice-and-takedown principle, drawn from China's Civil Code and Personal Information Protection Law, requires a network service provider, upon receiving a user's complaint that specific content may be infringing, to screen that content and take necessary measures such as deletion.
"As things stand, the infringement issues raised by generative artificial intelligence have not entirely exceeded the scope of China's existing laws." The urgent task, Wang Liming believes, is to establish a set of compliance review standards for generative artificial intelligence research and to prepare for possible specialized legislation in the future.
According to Lu Kai, a US-based scholar and researcher at the Paul Tsai China Center at Yale Law School, every US federal administration since the Obama era has issued regulations concerning artificial intelligence. These regulations, however, are not binding law, and violating them does not necessarily carry legal sanctions. "I believe the US government is using them to signal its stance: that it is paying attention to this field and encouraging everyone to explore it," Lu Kai said. By comparison, certain precedents set by the federal courts may offer more practical guidance for regulating artificial intelligence in the United States.
Li Chen said that research on generative artificial intelligence has become a new arena of international competition, and that China cannot stand on the sidelines but must take part. "I think competition over rules matters as much as competition over technology. We need to study and establish a full-lifecycle compliance management and monitoring mechanism as soon as possible, and strive for a greater say in future international rule-making."