"This Is Not Possible!" Turing Award Winner on Artificial Intelligence Controlled by Only a Few
"Super artificial intelligence has more powerful abilities than humans, and it may have a positive future, but there is also a possibility of a negative future." On July 6, at the opening ceremony of the 2023 World Artificial Intelligence Conference, Tesla CEO Elon Musk first raised questions about artificial intelligence during his speech, which sparked heated discussions among many professionals and scholars in the industry.
Musk argued that a key issue for the future of artificial intelligence is the ratio of robots to humans. At some stage, he said, the number of robots will exceed the number of humans, which will bring both positive and negative consequences. People therefore need to consider how to regulate such advanced artificial intelligence.
Yann LeCun, 2018 Turing Award winner and Chief AI Scientist of Meta's Fundamental AI Research team, expressed complete disagreement with strict regulation of artificial intelligence. He believes humans can use controllable methods, such as setting up guardrails, to prevent AI systems from deceiving or dominating people. This may not be easy, but it can be achieved. "In the long run, making artificial intelligence platforms safe and good requires making them open source. In the future, everyone will communicate with the digital world through AI assistants. If artificial intelligence is controlled by only a few people, this is not possible."
During a fireside chat, Shen Xiangyang, former executive vice president of Microsoft and foreign member of the US National Academy of Engineering, and Saifur Rahman, IEEE President and CEO, also discussed AI safety issues.
Shen Xiangyang believes the disagreements over AI control may stem from differing perspectives. "From an academic perspective, open research is important. But as someone from industry who builds AI products, I strongly agree that there should be some regulation and safeguards, because artificial intelligence is becoming increasingly powerful."
Saifur Rahman stated that industry organizations like IEEE may be an important force in preventing AI safety problems. Today's artificial intelligence projects require enormous computing power, he noted, so most of them cannot be carried out in secret. Organizations like IEEE can therefore provide a platform where scientists, engineers, and developers discuss their work together, helping to keep AI development safe and under control.