This American female scholar cast an important vote in the OpenAI "Palace Battle"
Recently, the dismissal of Sam Altman, CEO of OpenAI, became technology news of global concern. Why was the figure who rose to fame with ChatGPT forced out of OpenAI in a boardroom "palace fight" and hired by Microsoft? Opinions in the media and on the Internet diverged.
A reporter from Jiefang Daily and Shangguan News interviewed Li Hui, a researcher at the Shanghai Institute for Science of Science. In his view, Altman's dismissal was related to artificial intelligence governance: differing understandings of how artificial intelligence should be developed were the main source of the conflict within the OpenAI board of directors.
Li Hui noted that the "2019 Global Artificial Intelligence Governance Annual Observation", released by the Shanghai Institute for Science of Science in 2020, published the views of Helen Toner, director of strategy at Georgetown University's Center for Security and Emerging Technology, on artificial intelligence governance. In September 2021, Toner was appointed to the OpenAI board of directors.
Introduction to Helen Toner in the "2019 Global Artificial Intelligence Governance Annual Observation"
Altman was voted out in the boardroom battle, and board chairman Greg Brockman also resigned. The OpenAI board had six members, and according to industry insiders it made this major decision last week by a 4:2 vote. The four who voted in favor were OpenAI chief scientist Ilya Sutskever and three independent directors from industry and academia: Adam D'Angelo, Tasha McCauley, and Helen Toner. "It can be said that this American female scholar cast an important vote."
Toner's article for the "2019 Global Artificial Intelligence Governance Annual Observation" shows that she was cautious about OpenAI's GPT releases and attentive to publication norms and their social impact: "For me, the most eye-catching AI governance discussion of 2019 was about responsible publication norms. The discussion was prompted by OpenAI's decision to delay the release of GPT-2, a language model trained to predict the next word in a text." At that time, GPT-2 had not yet become widely known, but it had already demonstrated a considerable ability to generate fairly coherent paragraphs of text in a variety of styles.
What Toner appreciated was this: "More eye-catching than GPT-2's performance at language generation was OpenAI's announcement that it would not release the complete model. The reason: GPT-2 might be used to 'generate deceptive, biased, or abusive language at scale', and OpenAI hoped to take the opportunity to promote a discussion of responsible publication norms in the machine learning community."
Li Hui's analysis is that the publication norms for the GPT series of large models that Toner discussed are, at bottom, questions of artificial intelligence safety and ethics. With the advent and commercial use of large models, these questions have drawn the attention of more and more government officials, scholars, and entrepreneurs. How to ensure the safety of a company's products and of the industrial ecosystem while developing the technology and designing business models has become a problem facing all humankind.
Helen Toner's article published in the "2019 Global Artificial Intelligence Governance Annual Observation"
OpenAI's recent first developer conference showed that Altman had adopted a relatively aggressive strategy in commercializing large models. Besides releasing the GPT-4 Turbo model, he announced a GPT app store to launch at the end of November this year, allowing users to customize their own GPTs in natural language and receive a share of the revenue after uploading them to the store.
It can be inferred from Altman's dismissal that this developer conference angered several OpenAI board members. OpenAI's blog post on Friday said: "Mr. Altman was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."
Sam Altman photographed in San
"Actually, the struggle between the 'two lines' began when Musk left the OpenAI board of directors," Li Hui told reporters. In 2019, OpenAI, originally a nonprofit organization, underwent a major restructuring and created a "capped-profit" company. Before that restructuring, Elon Musk, one of OpenAI's founders, had already left the board. In March of this year, Musk complained about the company on social media: "I'm still confused as to how a nonprofit to which I donated about 100 million US dollars has become a for-profit company with a market value of 30 billion US dollars."
Because of this grudge, some Internet commenters joked that kicking Altman off the board was a tribute to Musk. Li Hui told reporters that during the development of GPT, members of OpenAI's governance team with whom he had communicated resigned one after another, which also seems to reflect the "line struggle."
In fact, the "line struggle" over whether artificial intelligence should put business or safety first is not confined to OpenAI. In March this year, the open letter "Pause Giant AI Experiments", released by the Future of Life Institute and signed by Musk and others, voiced the call of the "safety first" camp: all AI labs should immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, to guard against the risk of humanity losing control of civilization.
"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" the open letter asks sharply.
In Li Hui's view, the AI risks listed in the open letter do exist. With the advent of ChatGPT and GPT-4, artificial general intelligence may be just around the corner. So-called general artificial intelligence means AI that, like a human, can do many kinds of things and is no longer limited to certain specific tasks. Once such AGI systems are put into commercial use, they could become tools for some people to do evil.
"So through the Altman dismissal incident, we should not only watch the 'palace fight' but also think about the artificial intelligence governance issues behind it," Li Hui said. "This 'palace fight' is a vivid demonstration, within an elite company, of the epochal question of AI governance, which concerns all humankind; it is a test of human wisdom."
![This American female scholar cast an important vote in the OpenAI "Palace Battle"](https://a5qu.com/upload/images/2f80c3db7ce833624102d00df9fa9d48.webp)