Alternative View: AI Does Not Pose an Existential Risk

There has been urgent concern about the possibility of AI destroying humanity, alongside calls for AI regulation and an open letter urging a six-month pause on the development of powerful AI systems. The threats AI poses affect industries, individuals, and society. However, some AI experts hold different viewpoints.

A chief AI architect argued that the existential threat from AI is exaggerated and that comparisons to pandemics or wars are misleading: AI in its current state is still far from intelligence or awareness on par with humans. Other tech scientists agree that AI is not an existential threat, adding that it is not conscious and does not come up with its own goals.

GPT-4 performed poorly at the game Wordle; it could not guess the hidden words. Its neural network operates on numbers rather than words to map inputs to outputs. GPT-3 also struggles with aspects of human cognition: when given three scenarios with two choices each, the chatbot did not choose items the way humans do. It relies on training data to learn statistical patterns between words and predict the next word in a sequence, whereas humans draw on five senses to understand the meaning of a scenario.
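The point that the network "requires numbers instead of words" can be sketched with a toy tokenizer. This is a hypothetical, simplified illustration (not OpenAI's actual tokenizer, whose vocabulary and word pieces differ): text is first converted to integer IDs, and the model only ever predicts the next ID, which is one reason letter-level puzzles like Wordle are awkward for it.

```python
# Minimal sketch of tokenization: the model never sees words or letters,
# only integer IDs. The tiny vocabulary below is invented for illustration.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def encode(text):
    """Convert a string into the list of integer IDs the network consumes."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def decode(ids):
    """Map integer IDs back to word pieces for display."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

ids = encode("The cat sat on the mat")
print(ids)          # [0, 1, 2, 3, 0, 4]: all the network ever sees
print(decode(ids))  # "the cat sat on the mat"
```

Because whole word pieces map to single IDs, the model has no direct view of the individual letters inside a word, which is exactly what Wordle asks about.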

A recent study on emergent AI behaviors offers insight into these abilities: skills and unanticipated behavioral patterns that appear to arise from the interaction of simpler component systems. The researchers argued that such abilities are the appearance of something that is not there. They asserted that existing claims of emergent skills are an outcome of the chosen analysis rather than a fundamental change in how a model behaves on a given task. The choice of metric and the sample size can shape the result for a large language model: measuring the same data with a nonlinear metric produces an unpredictable, discontinuous curve, while a linear metric does not.
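The metric-choice argument can be made concrete with toy numbers (assumed for illustration, not taken from the study). Suppose per-token accuracy improves smoothly as models scale. A linear metric, mean per-token accuracy, shows smooth gains; a nonlinear, all-or-nothing metric, where every token of a 10-token answer must be correct, stays near zero and then jumps sharply, producing apparent "emergence" from the metric alone:

```python
# Assumed smooth per-token accuracies at five increasing model scales.
token_accuracy = [0.50, 0.70, 0.85, 0.95, 0.99]
answer_length = 10  # exact-match requires all 10 tokens correct

# Linear metric: the smooth per-token accuracy itself.
linear_metric = token_accuracy

# Nonlinear metric: probability that an entire 10-token answer is correct,
# i.e. p**10 (assuming independent token errors). Small per-token gains
# compound into a sudden-looking leap at the largest scales.
nonlinear_metric = [p ** answer_length for p in token_accuracy]

for p, exact in zip(linear_metric, nonlinear_metric):
    print(f"per-token {p:.2f} -> exact-match {exact:.4f}")
```

The underlying capability improves gradually at every step; only the discontinuous-looking exact-match curve suggests a sudden emergent skill.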

As of now, artificial narrow intelligence (ANI), AI designed for specific tasks only, is what is at work. Artificial general intelligence (AGI), AI that can perform and think at a human level, does not yet exist. Some in the ChatGPT community expect the successor to GPT-4 to be AGI, but will AGI development even be possible? It may or may not be, which suggests fixing the current problems of AI chatbots first. The chief AI architect pointed out that the ongoing risks from AI are not impossible to manage, outlining a process for addressing them: correct design, thorough testing, and responsible AI implementation.

In conclusion, alternative perspectives from tech experts offer a fresh view of current AI abilities. AI does not endanger humanity, and humans have not lost control over it. Emergent AI behaviors do not yet exist, let alone AI that approaches human-level intelligence. The risks of AI are controllable through proper processes. So why overstate current AI abilities as capable of destroying humanity? Is there more to the story, such as a push toward techno-authoritarianism or global regulation of AI?

Sources:

Is AI an Existential Threat? Some Experts Say New Warnings Are Overblown

Without a Body, ChatGPT Will Never Understand What It’s Saying

Researchers say AI emergent abilities are just a ‘mirage’

Concern About Shutting Down Sophisticated AI Model

Could artificial intelligence really wipe out humanity?

Should we fear the rise of artificial general intelligence?

Author: maureen l