The views expressed by contributors are their own and not the view of The Hill

Entering the singularity: Has AI reached the point of no return?  

The theory of technological singularity predicts a point in time when humans lose control over their technological inventions and subsequent developments due to the rise of machine consciousness and, as a result, machines' superior intelligence. Reaching the singularity stage, in short, constitutes artificial intelligence's (AI's) greatest threat to humanity. Unfortunately, AI singularity is already underway.

AI will be effective not only when machines can do what humans do (replication), but when they can do it better and without human supervision (adaptation). Reinforcement learning (trial-and-error guided by reward signals) and supervised learning algorithms (labeled data leading to predicted outcomes) have been important to the development of robotics, digital assistants and search engines. But the future of many industries and of scientific exploration hinges more on the development of unsupervised learning algorithms (unlabeled data leading to discovered patterns and improved outcomes), including autonomous vehicles, non-invasive medical diagnosis, assisted space construction, autonomous weapons design, facial-biometric recognition, remote industrial production and stock market prediction.
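To ground the distinction, here is a minimal sketch, assuming Python and the scikit-learn library (neither is named in this article), contrasting supervised learning, where the algorithm is handed labeled examples, with unsupervised learning, where it must discover structure in unlabeled data on its own:

```python
# Minimal illustrative sketch: supervised vs. unsupervised learning.
# Assumes scikit-learn; the article itself names no specific tools.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: 300 points falling into 3 natural groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: labeled data (X paired with labels y)
# leads to predicted outcomes for new inputs.
clf = LogisticRegression().fit(X, y)
print("Supervised predictions:", clf.predict(X[:5]))

# Unsupervised learning: unlabeled data (X alone); the algorithm
# finds the grouping structure without any human-provided labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster assignments:", km.labels_[:5])
```

The point of the contrast is the second half: the clustering step receives no labels at all, which is what the author means by machines improving outcomes without human supervision.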

Despite early warnings about the human rights gaps AI will create and about its social cost as it displaces humans as a factor of production, those who dismiss such concerns insist on labeling AI's development as just another technological disruption. Nonetheless, recent steps in the optimization of AI algorithms indicate that, beyond the theoretical disputes surrounding the emergence of technological singularity, we are leaving behind the phase of simple, or narrow, AI.

As such, it is expected that once machines reach basic autonomy in the next few years, they will be able not only to correct flaws in their own performance by devising better ways to produce outcomes, but also to do things that humans simply cannot.

The possibility of soon reaching a point of singularity is often downplayed by those who benefit the most from AI's development and who argue that AI has been designed solely to serve humanity and make humans more productive.

Such a proposition, however, has two structural flaws. First, singularity should not be viewed as a specific moment in time but as a process that, in many areas, has already started. Second, making machines gradually more independent while fostering human dependence on them through daily use will, in fact, produce the opposite result: more intelligent machines and less intelligent humans.

We aim to endow AI machines with attributes that are foreign to human nature (unlimited memory storage, lightning-fast processing, emotionless decision-making), and yet we hope to control the product of our most unpredictable invention. Moreover, given that the architects of this transformation are concentrated in very few countries, and that their designs are protected by either intellectual property laws or national security laws, control over AI development is an illusion.

Machine self-awareness begins with ongoing adaptations in unsupervised learning algorithms. Quantum technology will further consolidate AI singularity by transforming machines' artificial intelligence into a superior form of intelligence, thanks to an exponential ability to connect data and produce better outcomes. Still, machines do not need to be fully conscious, and quantum technology does not need to be integrated into AI, for the singularity stage to begin.

From law school admission exams to medical licensing, the performance of unsupervised learning systems such as ChatGPT and Bard shows that machines can already do things that humans do. These results, along with AI's most ambitious development yet (AI empowered by quantum technology), constitute the last warning to humanity: Crossing the line between basic optimization and exponential optimization of unsupervised learning algorithms is a point of no return that will inexorably lead to AI singularity.

The time for international political action has therefore arrived. Both AI-producing and non-AI-producing countries must come together to create an international body for technological oversight, along with an international treaty on artificial intelligence setting forth basic ethical principles.

The greatest risk of all is that humans might realize AI singularity has taken place only when machines remove from their learning adaptations the flaw in their original design that limits their intelligence: human input. After all, AI singularity will be irreversible once machines realize what humans often forget: To err is human.

J. Mauricio Gaona is an Oppenheimer Scholar at McGill University, an O'Brien Fellow at the Centre for Human Rights and Legal Pluralism (CHRLP) in Montreal and a visiting fellow at the Center for Advanced Research HRC Indianapolis.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

 
