
How to build ethical AI



You’ve likely already encountered artificial intelligence several times today. It’s an increasingly common technology, found in cars, TVs and, of course, our phones. But for most people, the term AI still conjures images of The Terminator.

We don’t need to worry about hulking armed robots terrorizing American cities, but there are serious ethical and societal issues we must confront quickly — because the next wave of computing power is coming, with the potential to dramatically alter — and improve — the human experience.

Full disclosure: I am general counsel and chair of the AI Ethics Working Group at a company that is bringing AI to processor technology in trillions of devices to make them smarter and more trustworthy.

Enabled by high-speed wireless capacity and rapid advances in machine learning, new applications for artificial intelligence are created every day. For technologists, it’s an exciting new frontier. But the rest of us are right to ask a few questions. To realize the full benefits of artificial intelligence, people must be able to trust it.

Governments across the world have started to explore these questions. The United States recently unveiled a set of regulatory principles for AI at the annual Consumer Electronics Show in Las Vegas. And U.S. Chief Technology Officer Michael Kratsios spoke on the CES stage about the importance of building trust in AI. But what does that really mean?

AI is already here in ways that many don’t even realize, from how we get our news, to how we combat cyberattacks, to the way cell phone cameras sharpen our selfies. Eventually, AI will enable life-saving medical breakthroughs, more sustainable agriculture, and the autonomous movement of people and products. But to get there we must first tackle important societal issues related to bias, transparency, and the massive amounts of data that feed AI. Citizens must be able to trust that AI is being implemented appropriately in all of these areas.

As search and social media companies are facing a so-called “techlash” around the world, we should learn the lessons from today’s privacy debate and grapple with these issues on the front-end, before AI becomes fully rooted. 

We need to adopt a set of high standards of behavior that promote trust and ensure ethics are built into the core of the technology, in the same way security and data privacy drive our engineering today.

Any standard-setting in this field must be rooted in the understanding that data is the lifeblood of AI. The continual input of information is what fuels machine learning, and the most powerful AI tools require massive amounts of it. This of course raises issues of how that data is being collected, how it is being used, and how it is being safeguarded.

One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet, programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and controls for both intentional and inherent bias.

This leads back to transparency.

A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one?

Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?

If we are going to give machines the ability to make life-changing decisions, we must put in place structures to pull back the curtain and reveal the decision-making behind the outcomes, providing transparency and reassurance.

The reality is that if the private sector doesn’t address these issues now, a government eventually will. But with the rapid rate of innovation in machine learning, regulation will always have a hard time keeping pace. That’s why companies and non-profit enterprises must take the lead by setting high standards that promote trust and ensuring that their staff complete mandatory professional training in the field of AI ethics. It is essential that anyone working in this field has a solid foundation in these high-stakes issues.

We are a long way off from machines learning the way a human does, but AI is already contributing to society. And in all its forms, AI has the potential to positively augment the human experience and contribute to an unprecedented level of prosperity and productivity. But the development and adoption of AI must be done right, and it must be built on a foundation of trust.

Carolyn Herzog (@CHerzog0205) is EVP, General Counsel and Chief Compliance Officer at Arm — a tech company that brings AI to processor technology — where she leads the company’s ethical AI initiatives and serves as Chair of the AI Ethics Working Group.

