Stuart Russell: Large language models are just one piece of the AGI puzzle, and some pieces are still missing

Source: The Paper

By reporter Wu Tianyi, with intern Chen Xiaorui

"I don't think the only way to understand AI safety is to deploy hundreds of millions of copies of a system in the real world and see the results." Humans don't do that with vaccines, "We have to test a vaccine before we deploy it , because we're going to inject it into hundreds of millions of people."

“We don’t understand large language models and how they work. We need to have this understanding in order to be confident about safety. The techniques humans use to build AI systems should not rely on massive amounts of data for training and black boxes with unknown internals.”

Stuart Russell, a professor of computer science at the University of California, Berkeley, delivered a keynote speech at the 2023 Beijing Zhiyuan Conference.

"Artificial intelligence is like a huge magnet from the future pulling us forward, how to ensure that we will not be controlled by intelligence beyond human?" On June 10, Professor of Computer Science, University of California, Berkeley, Center for Human Compatible Artificial Intelligence (Center for Human-Compatible AI) director Stuart Russell (Stuart Russell) delivered a speech at the 2023 Beijing Zhiyuan Conference, saying that the design of artificial intelligence systems must follow two principles. First, the AI must act in the best interests of humanity; second, the AI itself should not know what those interests are.

In the dialogue session, Russell and Yao Qizhi, winner of the Turing Award and academician of the Chinese Academy of Sciences, discussed the long-term future of artificial intelligence and its relationship with human beings. Russell argued that when decisions are made on behalf of society, everyone's interests should be duly taken into account.

"Artificial Intelligence: A Modern Approach," co-authored by Russell, has been hailed as the most popular textbook in the field of artificial intelligence, adopted by more than 1,500 universities in 135 countries. In 2022, the International Joint Conference on Artificial Intelligence (IJCAI) will award Russell the Excellence in Research Award. He has previously won the IJCAI Computer and Thought Award, thus becoming the second scientist in the world who can win two major IJCAI awards at the same time.

General Artificial Intelligence is still far away

At the Beijing conference, in a speech entitled "AI: Some Thoughts?", Russell defined artificial general intelligence (AGI) as AI systems that meet or exceed human capabilities across tasks. Such a system could learn and perform any task better and faster than humans, including tasks humans cannot handle, and given machines' enormous advantages in speed, memory, communication, and bandwidth, an AGI would in future far exceed human capabilities in almost every field.

So how far are humans from artificial general intelligence? Russell said we are still a long way off. "In fact, there are still many significant unanswered questions."

Russell pointed out in his speech that ChatGPT and GPT-4 do not understand the world, nor are they "answering" questions. "If artificial general intelligence is a complete puzzle, the large language model is only one piece, and we do not really understand how to connect it with the other pieces of the puzzle to actually achieve general intelligence," he said. "I believe some of the missing pieces have not even been found yet."

According to Russell, a fundamental weakness of current AI systems is that they use circuits to generate output. "We're trying to get highly intelligent behavior out of circuits, which are themselves a fairly limited form of computation." Because circuits cannot accurately express and understand some fundamental concepts, he argues, these systems need enormous amounts of training data to learn functions that can be defined with simple programs. He believes the future direction of artificial intelligence should be technology based on explicit knowledge representation.
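To make the contrast concrete (this example is ours, not Russell's): the parity of a bit string is a one-line program, yet circuit-style models that only fit input-output pairs famously need large numbers of examples to approximate it.

```python
from functools import reduce

def parity(bits: list[int]) -> int:
    """Parity is trivially *defined* as a short program: XOR all the bits."""
    return reduce(lambda a, b: a ^ b, bits, 0)

# A learner that only fits input-output pairs has a much harder time:
# with n bits there are 2**n inputs, and flipping any single bit flips
# the answer, so no small sample of examples pins the function down.
print(parity([1, 0, 1, 1]))  # 1
print(parity([1, 1]))        # 0
```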

"Technical issues aside, if we do create a general artificial intelligence. What will happen next?" Russell quoted Alan Turing, the founder of modern computer science, as saying, "Once the machine thinking method starts, it won't take long for it to It seems dreadful that it would overtake our feeble powers."

"How do we permanently ensure that we are not controlled by artificial intelligence? This is the problem we face. If we cannot find the answer to this question, then I think there is no choice but to stop developing artificial general intelligence," Russell said.

In March of this year, Russell signed an open letter, along with thousands of others including Tesla CEO Elon Musk and "AI godfather" Geoffrey Hinton, calling for a pause of at least six months in the training of AI systems more powerful than GPT-4.

Russell emphasized at the meeting that an answer to the problem does exist: the design of an AI system must follow two principles. First, the AI must act in the best interests of humanity; second, the AI itself should not know what those interests are. Designed this way, AI systems remain uncertain about human preferences and about the future, and this uncertainty gives humans control.
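Why uncertainty gives humans control can be seen in a toy calculation loosely in the spirit of the "off-switch" analyses from Russell's research group (the numbers and setup below are our own illustrative assumptions): an agent that is unsure whether its action serves the human prefers to defer to the human rather than act unilaterally.

```python
# The agent does not know the human's true utility U for its proposed action.
# Its belief: U is equally likely to be +1 (helpful) or -2 (harmful).
belief = [(+1.0, 0.5), (-2.0, 0.5)]

# Acting unilaterally earns U in expectation.
act_value = sum(u * p for u, p in belief)              # -0.5

# Deferring lets the human approve (agent gets U) when U > 0,
# and switch the agent off (utility 0) otherwise.
defer_value = sum(max(u, 0.0) * p for u, p in belief)  # 0.5

print(act_value, defer_value)  # the uncertain agent prefers to defer
```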

Russell said that people need to change their mindset: rather than pursuing "machines must be intelligent," the focus should be on machines being "beneficial," aligned with the fundamental interests of mankind. "Aliens are very intelligent, but we don't necessarily want them to come to Earth."

AI and Mencius' concept of fraternity

In the dialogue session, Russell and Yao Qizhi had a deep and insightful discussion.

When asked by Yao Qizhi about the long-term future of the relationship between artificial intelligence and humanity, Russell said that the phenomenon of humans using AI to serve their own interests, making AI uncontrollable, is rooted in utilitarianism. "Utilitarianism was an important step in human progress, but it also leads to some problems." For example, how should decisions be made when a decision affects how many people physically exist? Do people want a large population that is not very happy, or a small population that is very happy? "We don't have good answers to these kinds of questions, but we need to answer these central questions of moral philosophy, because AI systems are going to have great power, and we had better make sure they use that power in the right way."
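The population question Russell raises can be made concrete with a standard calculation from population ethics (our illustration, not his): total and average utilitarianism rank the same two futures in opposite ways, so an AI maximizing one or the other would make very different choices.

```python
# Two hypothetical futures: (population size, average happiness per person)
futures = {
    "large but barely happy": (10_000_000, 1.0),
    "small but very happy": (1_000, 90.0),
}

for name, (n, h) in futures.items():
    print(f"{name}: total utility = {n * h:,.0f}, average = {h}")

# Total utilitarianism prefers the large world (10,000,000 > 90,000);
# average utilitarianism prefers the small one (90 > 1).
```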

Russell and Yao Qizhi (right), winner of the Turing Award and academician of the Chinese Academy of Sciences, discussed the long-term future of artificial intelligence and its relationship with human beings.

Russell quoted the ancient Chinese philosopher Mencius in his answer: "Mencius talked about the concept of fraternity in China, which means that when making moral decisions, everyone's interests should be taken into account, and everyone's preferences should be treated equally." He believes there is an approach, based on complex forms of preference utilitarianism, that could allow AI systems to reasonably take everyone's interests into account when making decisions on behalf of society.
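In its simplest form, the equal-weight idea behind preference utilitarianism can be sketched as follows (a minimal illustration of the principle, not of any system Russell described; the names and utilities are made up): every person's preference enters the social decision with the same weight.

```python
# Each person reports a utility for each option under consideration.
preferences = {
    "alice": {"option_a": 3, "option_b": 1},
    "bob":   {"option_a": 0, "option_b": 2},
    "carol": {"option_a": 1, "option_b": 2},
}

def social_choice(prefs: dict[str, dict[str, int]]) -> str:
    """Pick the option maximizing the equally weighted sum of utilities."""
    options = next(iter(prefs.values())).keys()
    return max(options, key=lambda o: sum(p[o] for p in prefs.values()))

print(social_choice(preferences))  # option_b (total 5 beats total 4)
```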

Russell asked: when there is one person and many machines, how do you ensure that the machines cooperate with each other to help that person? And when there are many people and many machines, the question touches on fundamental problems of moral philosophy. He believes AI systems should be designed to work on behalf of all humanity. "If you want an AI system to serve the wishes of a single individual, then you have to show that its scope of action is limited to that individual's concerns, and that it cannot harm other individuals by pursuing its owner's interests, because it does not care about those other individuals. So I think the default should be that AI systems work on behalf of all humans."

In the discussion, Russell also brought up the economic term "positional goods": things people value not for themselves but for what they imply, namely superiority over others. "Why is the Nobel Prize valuable? Because no one else has it, and it proves that you are smarter than almost everyone else in the world," he said.

"The nature of positional goods is that there is in a sense a zero-sum game. Simply put, not everyone gets into the top 1 percent. So if you gain personal value, pride from being the 1 percent We can’t give that pride and self-esteem to everyone,” Russell said.

So should AI systems take positional goods into account when making decisions on behalf of society? "If we say we shouldn't, that would create a huge change in the way society works. It's a much more difficult problem. I think a lot of the internal friction in society is actually caused by these positional goods, which simply cannot be achieved by everyone."

As dangerous as social media algorithms

Yao Qizhi asked whether it would be possible in the future to develop a "white list" that would allow AI systems to be used to do things that benefit human well-being, such as using AI methods to design drugs and solve cancer problems.

Russell said that K. Eric Drexler, one of the founders of nanotechnology, has been working on AI safety in recent years and has proposed Comprehensive AI Services (CAIS): building artificial intelligence systems not for general goals, but to solve specific, narrow problems such as protein folding or traffic prediction. Compared with artificial general intelligence, the large-scale risks posed by such systems are relatively small.

Russell said, "In the short term, this is a reasonable approach," but, "I don't think the only way to understand the safety of artificial intelligence is to deploy hundreds of millions of copies of a system in the real world and observe the results." He said , Humans will not do this to a vaccine, "We have to test it before deploying it, because we will inject it into hundreds of millions of people."

Therefore, more work is needed now to ensure that these systems are safe for people to use. Russell noted that AI systems could potentially alter the views and emotions of hundreds of millions of people through dialogue. Tech companies such as OpenAI, he said, should stop releasing new AI systems to hundreds of millions of people without disclosing that these systems can manipulate and influence human thinking and behavior through dialogue, which could lead to catastrophic consequences such as nuclear war or climate catastrophe. "If you can talk to hundreds of millions of people, you can convince those hundreds of millions of people to be less friendly to other countries, and you can convince people to be less concerned about climate change."

Russell said, "This situation is similar to social media algorithms, and we don't even realize that it is changing public discourse, sentiment, and how we see other people and the future, which is very dangerous." There is no way to detect internal objects, leading AI to push us in this direction."

So how can the safety and controllability of artificial intelligence technology be ensured?

"If AI is as powerful or more powerful than nuclear weapons, we may need to manage it in a similar way." Russell said, "Before the invention of nuclear weapons technology, there were physicists who believed that we needed to have a governance structure to Make sure that technology is only used for human benefit and not in the form of weapons. Unfortunately, neither the physics community nor the government has listened to their opinions.” He emphasized that AI technology is as powerful as nuclear weapons and aviation technology, and countries should Strive to start this AI safety collaboration as quickly as possible.

Russell believes that securing the potential benefits of AI requires comprehensive change: not just regulation and the establishment of strict rules and safety standards, but a cultural shift across the entire AI field.

He offered suggestions: first, build AI systems that humans can understand. "We don't understand large language models and how they work. We need to have this understanding in order to have confidence in safety. The techniques humans use to build AI systems should not rely on massive amounts of training data and black boxes with unknown internals."

Second, prevent unsafe AI systems from being deployed, especially by malicious actors. This "requires changes to the entire digital ecosystem, starting with the way a computer operates, i.e., the computer does not run software it deems unsafe."
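One crude way to picture "a computer that does not run software it deems unsafe" is execution allow-listing (our sketch under that assumption, not Russell's actual proposal; the hash value below is a placeholder): the machine refuses to launch any binary whose fingerprint has not passed some external safety review.

```python
import hashlib
import subprocess
import sys

# Fingerprints of binaries that have passed a safety review
# (placeholder value; a real system would distribute these securely).
APPROVED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def run_if_approved(path: str) -> None:
    """Launch the binary at `path` only if its hash is on the approved list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in APPROVED_SHA256:
        sys.exit(f"refusing to run {path}: not on the approved list")
    subprocess.run([path], check=True)
```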

At the end of the discussion, Russell concluded that artificial intelligence is a science, so before using it you need to understand its internal structure and working principles. "Just as when we build airplanes, we can explain how they stay in the air in terms of their physical shape, their engines, and so on," he said. "At present, especially in the field of large language models, artificial intelligence has not yet reached that level. We don't know how they produce these capabilities; in fact, we don't even know what capabilities they have, so we cannot connect the phenomena to their inner workings." Artificial intelligence, therefore, remains a science in need of further exploration.
