AI on the verge of collapse – taking our brains with it
AI has been called "the new electricity", but now there are warnings of a bubble about to burst. While billions roll in, the technology is starting to pollute itself with digital garbage – at the same time as our ability to think independently is at stake.
The successful launch of ChatGPT in November 2022 made AI the acronym on everyone's lips. Since then, much of the world has been trying to work out how to relate to the "new" technology, while the valuations of AI companies have skyrocketed.
The type of AI that ChatGPT made popular is built on the principle that more data means better models. The latest versions of the service outperform the early ones largely because they were trained on more information, that is, more data.
But now the data is running out – at least the historical, human-created kind. ChatGPT and its competitors have already been trained on "everything that is on the internet".
– These services have used up all the "original" content on the internet, so they can only be trained on the new material that is generated, says Fredrik Lindsten, a professor at Linköping University who researches machine learning models.
Inhuman garbage
Until recently, that new material was created by humans, whether it was text, images or video. That is no longer the case. Much of what is now being created – exactly how much is disputed – is created by AI. The trend seems to be accelerating as more people use various AI services.
It is at this shift that the first, and perhaps clearest, AI bubble risks bursting. AI creates a completely new type of "garbage" compared to what humans produce.
– Research indicates that when AI trains on text created by AI, the amount of "garbage" increases. At a certain point, it is no longer possible to guarantee that what the AI generates is not completely crazy. The technology behind it is not built to determine what is true or false, says Virginia Dignum, professor of responsible AI at Umeå University.
A research study published in Nature in 2024 shows that language models like ChatGPT that are trained on AI-generated material accumulate irreversible errors, and their answers drift further and further from the original source data. The researchers describe this as the generated answers being contaminated at every step.
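The mechanism the study describes can be illustrated with a toy simulation (my own sketch, not the study's actual method): here a "model" is just a mean and a spread fitted to data, and each new generation is fitted only to samples produced by the previous generation rather than the original data. Information about the original distribution is gradually lost, and with few samples per generation the estimated spread typically drifts away from the true value.

```python
import random
import statistics

random.seed(0)

# Generation 0: the "human" data, a standard normal distribution.
mu, sigma = 0.0, 1.0
n = 20          # few samples per generation exaggerates the effect
history = []

for generation in range(50):
    # Each new "model" is fitted only to output from the previous one,
    # mimicking training on AI-generated text instead of original data.
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    history.append(sigma)

print(f"estimated spread after  1 generation:  {history[0]:.3f}")
print(f"estimated spread after 50 generations: {history[-1]:.3f}")
```

Because each generation only ever sees the previous generation's output, estimation errors compound instead of averaging out – a simplified analogue of the "contamination at every step" the researchers describe.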
Although language models risk deteriorating as AI-generated content flows into their training material, this does not mean the end of the AI boom, according to Mattias Rost, associate professor of interaction design at the University of Gothenburg.
– To get further, you need other ways to train. And we have that now, we have new ways to train these models, he says.
Like a child
Exactly how the underlying changes are made depends on which service or technology you are talking about.
– It is a bit like raising a child. We give them tests. If they pass the tests, they don't need to practise any more; if they fail, they have to repeat a grade. This works very well in mathematics and programming, says Mattias Rost.
Studies show that certain brain activity decreases when we rely on AI agents – for example, that we become worse at navigating when we constantly rely on the map function of our mobile phone.
Sam Altman, CEO of OpenAI, recently said in an interview that he did not really know how he would manage to care for his newborn child without the help of ChatGPT. It may have been said with a twinkle in his eye, but the number of people who consult a language model is growing.
Cognitive laziness
Olof Sundin, professor of library and information science at Lund University, points to risks when we turn to generative AI in large and small ways.
– If we outsource our thinking to a chatbot, what happens in the long run? The risk is that we develop cognitive laziness and become more vulnerable to influence, he says.
He points out that universities and colleges have an important role to play in teaching critical questioning.
– Knowledge is not a big fact machine that can only deliver straight answers. We need to practise turning and twisting different arguments, being creative and seeing different perspectives, he says, and continues:
– Overall, this technology is very unregulated and it is used widely. Compare this to when a new drug is going to enter the market, then it is tested and every little risk is stated.
The change in AI is fast, but not fast enough to satisfy the hunger of the global market economy. And in recent weeks, the International Monetary Fund (IMF), bankers, Wall Street executives and Google CEO Sundar Pichai, among others, have warned of overinvestment in AI – similar to what happened during the dotcom bubble in the early days of the internet.
“I think it will be the same with AI,” Pichai said recently in an interview with the BBC.
Should the economic AI bubble burst, Pichai predicts massive consequences.
“I don’t think any company will be immune (to the effects), including ourselves.”
FACTS
What is AI?
AI is short for artificial intelligence. It is a collective name for a range of technologies that try to make computer programs behave more like a human: reasoning and planning, understanding everyday language, learning from data, recognizing patterns, and combining different impressions to reach a conclusion.
An important prerequisite for AI is machine learning. In a conventional computer program, a human must program everything the program should be able to do; the program acts according to a system of rules written in its code. Machine learning is a method where a computer program creates such a system of rules on its own by being fed a large amount of data.
A classic example is image recognition. The reason the image library on your computer can find pictures of dogs is that the system has been trained to recognize dogs in images.
Source: Internet Foundation.