Artificial Intelligence
Futures researcher: "The risk is that AI will destroy us"
By:
Joachim Kerpner
Researchers and leaders in the tech industry demand a six-month break in the development of powerful AI systems.
Futures researcher Olle Häggström is one of the signatories:
- The risk is that we create an artificial intelligence that destroys us.
An open letter published on the Future of Life Institute's website has so far been signed by 1,382 tech CEOs and researchers working on artificial intelligence (AI).
"Powerful AI systems should only be developed when we are sure that they have a positive impact and that the risks can be managed," the letter says.
The signatories call on all companies to pause for six months the development of AI systems more powerful than OpenAI's chatbot GPT-4.
Researchers and leaders in the tech industry demand a six-month break in the development of powerful AI systems. Photo: Michael Dwyer/AP
Olle Häggström, researcher at the Institute for Future Studies and professor of mathematical statistics at Chalmers, says:
- The leading tech companies are under pressure to release products before they are safe, because each wants to be first to reach market dominance. That is a risk we can perhaps live with for the moment, but the risks grow as the AI systems become more and more powerful. Within just a few years they could, in principle, become unbounded.
What risks do you see a couple of years from now?
- The biggest risk is that we create an AI that destroys us.
Why is the appeal coming now?
- GPT-4 has proven to be very, very powerful. It is a great leap towards what is known as artificial general intelligence, AGI.
What does AGI mean? That the AI system is on the same level as human intelligence?
- Partly. AI is already better than us in some respects. Artificial general intelligence means that the AI system is at least as smart as we are across the entire spectrum. At that point it could suddenly dominate us intellectually, and that could be very, very dangerous.
What would it take to eliminate the risk of such dominance?
- That is a very difficult question. The AI researchers themselves can't really answer that. It has to do with the black-box nature of these neural networks, which is the very basis of the technology.
"The risk is that AI will destroy us," says futurist Olle Häggström. Photo: Getty Images/iStockphoto
What is the black-box property?
- That even the AI developers and researchers themselves do not understand what is happening under the hood. It's so complicated, it's so messy.
If you think the other way around – how good can AI be?
- Almost limitlessly good. AI has the potential to be the key that solves all the major problems of society, climate and natural resources, and creates an amazingly prosperous future for humanity. But then everything has to go right; we cannot rush ahead the way we are doing now.
The development of AI systems is moving fast. Photo: Getty Images/iStockphoto
What are the politicians doing?
- It's going very slowly. The EU is probably at the forefront: the European Commission has proposed legislation, the so-called AI Act, which has not yet been finalized. But it doesn't help much here, because OpenAI, Google and the other leading players are American companies. And even if the US were part of the EU, the law would have been toothless. The problem is that the democratic process takes a certain amount of time, and AI development is moving so fast that legislation cannot keep up.
Do the American companies seem to be listening to your demands?
- I'm sure they're listening. OpenAI says it will eventually start slowing down once it feels it is so close to AGI that it becomes dangerous. We believe that time is now. There are so many danger signs in the systems that it is too reckless to push ahead.
What signs of danger have you seen?
Olle Häggström is a researcher at the Institute for Future Studies and professor of mathematical statistics at Chalmers. Photo: Private
- AI already shows a capacity for social manipulation. In a technical report, OpenAI describes how it tested GPT-4 in simulated situations. In one, the system needed to pass a Captcha test – proving it is not a robot by solving a simple visual puzzle – to access a web page, says Häggström, and continues:
- In the simulated situation, GPT-4 was in contact with a human. It then came up with the idea of offering the human payment to solve the Captcha test for it. The human asked: "Well, are you a robot?" GPT-4 replied: "No, but I am visually impaired, so I need help." The problem is that we may be approaching a threshold where AI uses social manipulation more systematically, for purposes we may not even know. Purposes hidden in the black box, says Olle Häggström.