The views expressed by contributors are their own and not the view of The Hill

Artificial intelligence is only as ethical as the people who use it


Artificial intelligence is revolutionary, but it’s not without its controversies. Many hail it as a chance for a fundamental upgrade to human civilization. Others believe it could take us down a dangerous path, arming governments with Orwellian surveillance and mass-control capabilities.

We have to remember that any technology is only as ‘good’ or ‘bad’ as the people who use it. Consider the EU’s hailed ‘blueprint for AI regulation’ and China’s proposed crackdown on AI development; both seek to regulate AI as if it were already an autonomous, conscious technology. It isn’t. The U.S. must think carefully before following in their footsteps, and should instead consider addressing the actions of the user behind the AI.

In theory, the EU’s proposed regulation offers reasonable guidelines for the safe and equitable development of AI. In practice, these regulations may well starve the world of groundbreaking developments in areas that desperately need them, such as industrial productivity, healthcare and climate change mitigation.

You can hardly go through a day without engaging with AI. If you’ve searched for information online, been given directions on your smartphone or even ordered food, then you’ve experienced the invisible hand of AI. 

Yet this technology does not just exist to make our lives more convenient; it has been pivotal in our fight against the COVID-19 pandemic. It proved instrumental in identifying the spike protein targeted by many of the vaccines in use today.

Similarly, AI enabled BlueDot to be among the first to raise the alarm about the outbreak of the virus. AI has also supported the telehealth services used to communicate information about the virus to the public, the start-up Clevy.io being one such example.

With so many beneficial use cases for AI, where does the fear stem from? One major criticism leveled at AI is that it is giving governments the ultimate surveillance tool. One report predicts there will be 1 billion surveillance cameras installed worldwide by the end of the year. There is simply not enough manpower to watch these cameras 24/7; the pattern-recognition power of AI means that every second of every frame can be analyzed. While this has life-saving applications in social distancing and crowd control, it can also be used to conduct mass surveillance and suppression at an unprecedented scale.

Similarly, some have criticized AI for cementing racial and gender inequalities, with fears sparked by AI-based hiring programs that displayed bias because of their reliance on historical data patterns.

This clearly shows the need to bake the principles of trust, fairness, transparency and privacy into the development of these tools. The question, however, is who is best suited to do so: those closest to the development of these tools, government officials, or a collaboration of the two?

One thing is for certain: Understanding the technology and its nuances will be critical to advance AI in a fair and just way.

There is undoubtedly a global AI arms race going on. Over-regulation is giving us an unnecessary disadvantage. 

We have a lot to lose. AI will be an incredibly helpful tool for tackling the challenges we face, from water shortages to population growth and climate change. Yet these benefits will not be realized if we keep leveling suspicion at the technologies rather than the humans behind them.

If a car crashes, we sanction the driver; we don’t crush the car.

Similarly, when AI is used for human rights and privacy violations, we must look to the people behind the technology, not the technology itself.

Beyond these concerns, a growing crowd of pessimistic futurists predicts that AI could, one day, surpass human general intelligence and take over the world. Herein lies another category mistake: no matter how intelligent a machine becomes, there is nothing to say that it would or could develop the uniquely human desire for power.

That said, AI is in fact helping drive the rise of a new machine economy, in which smart, connected, autonomous, and economically independent machines or devices carry out the necessary activities of production, distribution, and operations with little or no human intervention. According to PwC, 70 percent of GDP growth in the global economy between now and 2030 will be driven by machines. That amounts to a nearly $7 trillion contribution to U.S. GDP from the combined production of AI, robotics, and embedded devices.

With this in mind, the ethical concerns around AI are real and must be taken seriously. However, we must not allow these considerations to morph into restrictive, innovation-stopping interventionist policy.

We must always remember that it is the people behind AI applications who are responsible for breaches of human rights and privacy, not the technology itself. We must use our democratic values to dictate what types of technologies we create. Patchy, ill-informed regulation in such a broad space will likely prevent us from realizing some of the most revolutionary applications of this technology.

Nations that over-regulate this space are tying their own shoelaces together before the starting pistol has even sounded.

Kevin Dallas, a former executive at Microsoft, is president and CEO of software provider Wind River.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
