The views expressed by contributors are their own and not the view of The Hill

Human rights without humans: The final line between artificial and superhuman intelligences


Human intelligence precedes civilization; artificial and superhuman intelligences, however, will redefine it. Current research in artificial general intelligence (AGI) and intelligence enhancement (IE) seeks to remove human error from our most ambitious technological quests. On the one hand, using evolutionary algorithms, AGI research aims to develop a fully automated, increasingly independent, gradually cognitive, and eventually conscious artificial being. On the other hand, using neurotechnology, IE intends to create a super-intelligent and inherently different human being capable of counteracting the inexorable ascension of machines in the coming years.
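For readers unfamiliar with the term, the evolutionary-algorithm loop is deceptively simple: generate candidate solutions, score them against a fitness measure, keep the fittest, mutate, and repeat. The Python sketch below is a minimal toy (a "one-max" bit-string problem; every parameter and name is illustrative, not drawn from any AGI laboratory) showing how improvement emerges without anyone programming the solution:

```python
import random

TARGET_LENGTH = 32    # bits per candidate solution
POPULATION = 50       # candidates per generation
MUTATION_RATE = 0.05  # chance each bit flips when copied

def fitness(candidate):
    # The score the algorithm maximizes: here, simply the number of 1-bits.
    return sum(candidate)

def mutate(parent):
    # Copy the parent, flipping each bit with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in parent]

population = [[random.randint(0, 1) for _ in range(TARGET_LENGTH)]
              for _ in range(POPULATION)]

generation = 0
while max(fitness(c) for c in population) < TARGET_LENGTH:
    # Selection: keep the fittest tenth, then refill by mutating survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]
    generation += 1

print(f"All-ones solution found after {generation} generations")
```

Nothing in the loop encodes the answer; selection pressure alone finds it, which is precisely what makes the approach so powerful and so hard to constrain.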

But what is the limit of such scientific enterprises? If we develop a conscious artificial being or a super-intelligent human being, which rights then prevail: human rights, artificial rights, or superhuman rights? How far should we go to satisfy our intellectual curiosity, our ability to innovate, or other less noble yet often prevailing motives such as productivity, greed, or power?

Has the time come to develop, in addition to individual human rights (e.g., equality, liberty, human dignity), a new generation of collective human rights (e.g., equal technological access, human-life preservation, reciprocal income equality, brain privacy) directed at protecting humans from humans, humans from superhumans, and humanity from extinction?

Nothing threatens us more than our decisions. Although our technological progress may lead us to think we live in a modern, civilized world, the circular development of human society (e.g., going from supporting Nazis in the 1930s to supporting Nazis in 2018) reveals a rather regressive and inhuman tendency.

Machines will eventually replace most humans as a factor of production, depriving people across the planet of vital sources of income and widening an already vast income gap. Knowing this, how is it that we prioritize economic factors such as reductions in labor, health care, insurance, and litigation costs over human rights concerns while downplaying existential ones?

For instance, according to a recent report by the World Economic Forum, machines are expected to take over up to 42 percent of all tasks currently performed by humans within the next four years. Yet responses to this threat range from enhancing brain capacity and providing universal basic income to reskilling the current and future workforce, all while millions across the world are still training for jobs that will soon disappear.

Evolutionary algorithms bring AI’s greatest risk yet: machine learning, from which AGI’s full development may result. In 2014, Professor Stephen Hawking warned of this risk: “the development of full artificial intelligence could spell the end of the human race.” More recently, however, in an op-ed published in the Canadian newspaper The Globe and Mail, Harvard cognitive psychologist Steven Pinker dismissed AI risks to humanity as “apocalyptic thinking.”

Although Pinker’s larger argument is reasonable, it suffers from a fatal flaw: his risk assessment of AI technology hinges on human, not machine, cognitive behavior. Pinker’s analysis focuses mainly on human nature, bias, and decision-making processes, factors that may become gradually exogenous in AGI’s evolutionary algorithms and self-learned cognitive functions.

As MIT Professor Erik Brynjolfsson explains, machine learning sometimes yields a million-fold improvement in machine performance, enabling machines to solve problems on their own. That is, outside human supervision and beyond human nature; therein lies the risk.
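To make that concrete, here is a minimal sketch of learning without supervision, assuming a toy Q-learning setup of my own devising (not an example from Brynjolfsson’s work): an agent in a five-cell corridor discovers, through trial and error alone, that moving right reaches a reward. No line of the program tells it the correct action.

```python
import random

N_STATES, ACTIONS = 5, [-1, +1]        # corridor cells; step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    # Pick the highest-valued action so far, breaking ties at random.
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    state = 0
    while state < N_STATES - 1:
        # Mostly exploit what has been learned; occasionally explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Update the value estimate from experience alone.
        Q[(state, action)] += ALPHA * (
            reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS) - Q[(state, action)]
        )
        state = nxt

# The learned policy at every non-terminal cell: move right (+1).
print([greedy(s) for s in range(N_STATES - 1)])
```

The reward signal is the only feedback; the policy the agent ends up with was never written down by a human, which is the sense in which such systems operate outside our supervision.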

AGI also increasingly overlaps with human functions. In fact, self-driving cars, trucks, trains, boats, and planes, as well as customer-service, bartender, waiter, firefighter, police, mower, farmer, chef, dentist, medical-assistant, lawyer, and journalist robots, are being introduced as cost-efficient and more reliable alternatives.

Make no mistake: this is not about whether these technologies should or will develop, particularly when, in many ways, they already have. It is about the defining balance society must strike between its scientific and economic ambitions and its existential concerns.

Notwithstanding pundits’ estimates, we cannot really predict how fast, or what, such intelligences will learn, not when our prime goal is to create independent and improved intelligences. After all, is that not the very risk we are assuming?

Jose Mauricio Gaona is an O’Brien Fellow at the McGill Centre for Human Rights and Legal Pluralism (CHRLP), a Saul Hayes Fellow at McGill University’s Faculty of Law, and a Vanier Canada Scholar (Social Sciences and Humanities Research Council of Canada, SSHRC).


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
