Artificial intelligence is no longer obedient... Are humans ready to resist?, If one day artificial intelligence | hallucinations | methods
ChatGPT is an unavoidable topic at this World Artificial Intelligence Conference. Many attending experts have basically reached a consensus: since the birth of ChatGPT, the dawn of the era of artificial intelligence has emerged. But while marveling at the powerful capabilities of ChatGPT, people have also expressed deep concerns.
Hallucination problem
Firstly, there is the issue of hallucinations. The so-called illusion, in layman's terms, is "telling lies seriously.". People who have used generative artificial intelligence such as ChatGPT may have experienced this: they often give seemingly reasonable but error prone answers with certainty. The hallucination occurred because there was a problem with the corpus of machine learning. Xiao Rong, Vice President of Yuntian Lifei, likens artificial intelligence to a child who learns from others, saying whatever adults teach. Xiao Rong believes that to solve the problem of hallucinations, it is necessary to confirm the facts of the corpus input by artificial intelligence.
At present, there has been some progress in solving the problem of hallucinations. For example, Microsoft has introduced a knowledge base to enable New Bing to provide more accurate answers. Another path is to strengthen defense against the output results of large models. One defense method that both domestic and foreign artificial intelligence fields are trying is "red team testing", which means using one large model to attack another large model to detect vulnerabilities in the attacked large model. However, the hallucination problem of artificial intelligence has not been fundamentally solved worldwide.
Alignment issues
![Artificial intelligence is no longer obedient... Are humans ready to resist?, If one day artificial intelligence | hallucinations | methods](https://a5qu.com/upload/images/c555cb318f9ba57ea31e83727ab4af1e.jpg)
Another issue related to hallucination is alignment. The so-called alignment is to align the goals of artificial intelligence with those of humans. ChatGPT explains that ensuring the behavior and decisions of artificial intelligence systems align with human values, goals, and ethical considerations. Solving alignment problems is to make artificial intelligence useful to humans and avoid catastrophic consequences.
For a long time, scientists have been concerned about losing control over artificial intelligence. In 1960, the father of control theory, Wiener, published a visionary article, concerned that "machines learn at a speed that programmers find difficult to understand and develop unexpected strategies.". He pointed out that such a strategy may not be what programmers really want. In recent months, artificial intelligence has made remarkable progress, and Wiener's concerns have become urgent. In August 2022, the American research group "Artificial Intelligence Impact" released a questionnaire survey, which showed that 5% of respondents believed that artificial intelligence would lead to catastrophic results.
The stronger the tool, the greater the damage it can cause if misused. But this does not mean that we should give up powerful tools. Xiao Rong said, "We can use artificial intelligence to generate deceptive content, but shouldn't we use artificial intelligence? Actually, this question is easy to answer. Let's take a look at Photoshop, a mapping software. We have been using Photoshop for many years, and even experts can use it to draw fake and fake images, but we haven't banned it because it's not a matter of whether the tools are good or not, but a matter of people."
The research community has proposed two strategies, data screening and regulation, to solve the alignment problem. The strategy of data screening is similar to that of solving hallucinations, which is to make the data learned by artificial intelligence more accurate and reflect human values. But many industry insiders also point out that as artificial intelligence systems become increasingly complex, data filtering is becoming increasingly difficult. More researchers believe that there is a need to strengthen the regulation of artificial intelligence. Professor Qiao Yu from the Shanghai Artificial Intelligence Laboratory said, "We not only need the participation of the artificial intelligence community, but also need to introduce scholars from social sciences to jointly establish a large-scale model framework for artificial intelligence, ensuring that it conforms to human values."
Cost issues
![Artificial intelligence is no longer obedient... Are humans ready to resist?, If one day artificial intelligence | hallucinations | methods](https://a5qu.com/upload/images/547575f9be41c0d9dc26c4da8e831e15.jpg)
If the big model is built with money, it may not be excessive. The various inputs of the big model - data, computing power, electricity, programmers - are all expensive. For example, training GPT-3 used 1.3 gigawatt hours of electricity, OpenAI estimated it to cost $460000, and the training cost of GPT-4 was about $100 million.
However, the industry believes that this is only a temporary phenomenon, and the cost of artificial intelligence will gradually decrease. Large models will become more accessible and suitable for various industries and ordinary people. Last Monday, a US institution spent only $20000 to train a large model of the same level as the GPT-3. In addition, benefiting from some open-source large models, the cost of training new models for later generations has been greatly reduced. The open source model can be downloaded by anyone and can be fine tuned for specific tasks. Researchers at Stanford University used the weights of Meta's basic model LLaMA to construct a model called Alpaca, which cost less than $600. Alpaca's performance on certain tasks was similar to the initial version of ChatGPT.
In the future, the cost of using large models in various industries will also decrease. Alibaba Cloud Chief Technology Officer Zhou Jingren believes that an important way to lower the threshold for use is to establish a "large model free market". He said, "With such a community, users know where to find big models and have a clearer understanding of how to use them. At the same time, developers can more efficiently search for models, integrate them into their existing business systems, and help the big model ecosystem continuously innovate." Aliyun has launched the "Magic Building" big model community, which now gathers 1.8 million developers and over 900 models.
Although there are still many challenges waiting to be solved for large models, the industry generally sees the dawn of the era of artificial intelligence. Qiao Yu's Outlook on the Development of Aircraft and Artificial Intelligence. He said that the Wright brothers first flew a plane in 1903. For the first time, humans were able to lift themselves off the ground like birds. But in that era, airplanes were just a game for explorers and could not yet become a strong productivity tool. There were no standards, no safety, let alone a business model. However, more than 100 years later today, the global aviation industry has become very developed. Qiao Yu believes that when looking at large models, we should also adopt a developmental perspective. Although they still have hallucinations, cannot be fully aligned with humans, and are still expensive, we believe that humans will have the wisdom to solve these problems and make powerful artificial intelligence work for us.