Replacement, or a perfect future? What lies ahead for human-machine coexistence
Are you worried about being replaced by artificial intelligence?
Yuval Noah Harari once observed that the fear of artificial intelligence has haunted humanity since the dawn of the computer age. That fear seems to have drawn closer to everyone since the emergence of ChatGPT. Large generative pre-trained language models, with ChatGPT as their representative, are seen as overcoming the communication barriers between humans and machines. The language barriers between countries, and the jargon barriers between professions, now seem resolvable simply by typing in natural language. Technology is breaking down the linguistic "Tower of Babel."
What will the world look like when humans and machines coexist? Will humans be replaced by large models? How should the two coexist? On September 7, speakers at the Bund Conference offered some thoughts.
The emergence of large models is inevitable
Technology is developing at an accelerating pace, and the world is changing faster and faster. ChatGPT surpassed 100 million monthly active users less than two months after launch, while TikTok, previously the fastest-growing application, took nine months to reach the same milestone.
"As technology professionals, the first worry we have when we open our eyes each morning is: will another revolutionary technology emerge? Is our job still secure?" Professor Xiao Yanghua of the School of Computer Science and Technology at Fudan University said with a smile. "No previous era has changed as quickly as this era of technological explosion."
These accelerating changes have made the world increasingly complex and its development increasingly uncertain. Across the globe, populations are aging, natural disasters are frequent, the political environment is tense, and economic expectations are declining. With these factors intertwined, the risk of social development spinning out of control is also rising.
At the same time, human cognitive abilities are limited. "Because biological evolution is slow, humans today are not much smarter than ancient humans," said Xiao Yanghua. This poses a major challenge: limited cognition cannot cope with a world that is ever more complex, ever faster-changing, and ever more at risk of losing control.
Large models, on the other hand, may within a decade leap beyond the current framework of human imagination. As training data and computing power continue to grow, machine cognition can, in theory, keep pace with the world's growing complexity.
The emergence of such large models is therefore inevitable. "To address the many problems facing human society today, we must develop large models with strong cognitive abilities," said Xiao Yanghua.
A perfect future?
According to a recent survey report by the Pew Research Center, 42% of experts are both excited and concerned about the changes that the evolution of "humans + technology" will bring by 2035.
Large models can indeed solve some problems in life and work. Professor Wang Guoyu of the School of Philosophy at Fudan University shared at a roundtable that after writing an English paper, he polishes it with ChatGPT, which, he admitted, does a better job than he does.
On the other hand, even before we discuss the future, many hidden risks of AI such as large models are already surfacing. The boundary between humans and machines is blurring, and the traditional Turing test is no longer adequate for telling them apart in the era of large models. Humanoid robots and virtual anchors appear one after another, making it hard to distinguish real from fake, and social-governance problems such as misinformation and rampant fraud have emerged. "Artificial intelligence is using the spear of technology to pierce the shield of technology, making people realize that seeing is not necessarily believing. This will reshape people's cognitive concepts," said Yuan Hui, chairman and CEO of Xiaoi Group.
Meanwhile, large generative language models such as GPT-4 possess, in form, the core capabilities of human thought: language comprehension, trial and error, logical reasoning, and planning. "It has many abilities that simulate human cognition, and it has the potential to become an autonomous intelligent brain," said Xiao Yanghua.
Looking ahead, is an autonomous general artificial intelligence with cognitive abilities still just a tool in the traditional sense?
What worries Xiao Yanghua most is how such tools are used. As large models become more useful, people will grow increasingly dependent on them. When large amounts of writing are handed over to machines, people lose the opportunity to exercise their own thinking. "The resulting regression of intelligence may lead to a regression of human subjectivity."
Re-understanding ourselves
In fact, there is no definitive answer yet to whether artificial intelligence will replace humans. But the Bund Conference did reveal a consensus on the question: the key factor shaping the answer is cognition, that is, whether humans can re-understand themselves and the world.
It is hard for any one person to attain a truly universal education, but large models can. Several forum guests mentioned a common effect: large models are vast containers of knowledge, and they make knowledge cheap. "We need generative artificial intelligence to help humans squeeze the water out of knowledge," said Duan Weiwen, director of the Department of Philosophy of Science and Technology at the Institute of Philosophy, Chinese Academy of Social Sciences. Large models, he suggested, can be seen as mirrors of human intelligence.
Generative artificial intelligence reveals that much of what we have been doing is imitating knowledge and meaning. "Through this mirror of imitated intelligence, we can force ourselves to stop pretending to know, and to stop expressing what we do not feel. We can let artificial intelligence finish the boring tasks for us, but we should not let it make decisions for us."
In Duan Weiwen's view, most language and thought are mere imitation. Before artificial intelligence began simulating briefings, meetings, memos, proposals, schedules, invoices, code, and code reviews, the work on our screens was already an exercise in simulating these things ourselves. "Now it is time to leave the simulation of simulations to the machines," he said.
Returning to the opening question: what should we do to preserve human subjectivity? Rebuilding the human-machine relationship was the answer offered at the forum. In the future, humans and artificial intelligence will no longer be merely user and tool, but partners in consultation, assistance, and collaboration.
In Xiao Yanghua's view, reconstructing the human-machine relationship begins with reassessing the respective value of humans and machines.
In innovation and creation, large models excel at routine, repetitive information processing and can handle the "generation" part almost perfectly. People should supply the prompts, and focus on evaluation, explanation, judgment, and selection. "Evaluation matters more than generation, appreciation more than creation; planning more than execution, conception more than writing; questioning more than answering, challenging more than complying," said Xiao Yanghua.
On reflection, it is not hard to see that the more important, irreplaceable role always belongs to whoever holds the initiative. In his view, then, the most important human ability in the era of human-machine symbiosis is the ability to use and control AI. While various sectors ponder how to make artificial intelligence bigger and stronger, an equally important question is how to keep the capabilities of large models within certain bounds.
Today's large models do not yet have self-awareness, but that may change in the future. For humanity, this is the safety bottom line. After all, "a civilization without humans is meaningless," said Xiao Yanghua.