AI is just another technological step; it won’t exterminate humanity or create a dystopia
The recent surge in conversation and debate around artificial intelligence has brought with it a wide range of competing narratives, from pundits and politicians who do not understand the technology's limitations to people who are already making billions of dollars marketing its potential upsides.
Each is spinning a narrative about what this all means for the future of work and for our identity as people.
Taking a step back, we must look at the facts, the context and the precedents for this type of societal disruption, and not fall victim to the fear and frenzy that surrounds any new technological phenomenon and often eclipses the real issues.
Let’s start with the facts. The reality of AI is nothing like the present news cycle suggests. We are no closer to understanding how to make a computer conscious than we were a decade ago, despite the fact that AI systems are vastly improving in their ability to plumb human information and mimic our patterns of communication. We have no reason to believe that, in any foreseeable future, AI computers could have human desires or want to exterminate humanity, much less have the ability to carry out such a nightmare scenario.
But AI has been causing real harm in society for years. Recent advances are only exacerbating these problems, most notably, in the form of wealth inequality, potential job losses (which are now being tracked by the federal government) and racial bias.
Over the past 40 years, automation has driven 50 to 70 percent of changes to the wage structure. Recent data show 3,900 jobs were lost last month due to AI. And when it comes to bias, AI has so far done more harm than good; reports show the risk of discrimination is increasing.
These are the genuine threats. So we need to prioritize applications that attempt to close the wealth gap rather than widen it, that enhance human job functions rather than replace them, and that fight bias, both by diversifying the tech industry and by creating and adhering to a set of diversity, equity and ethics standards.
Conflicting narratives describe AI as so powerful that, in just the next decade or two, it will either extinguish us or revolutionize the quality of life for the whole planet. But the awesome-power narrative distracts from the real regulatory steps we need to take, today, to keep the worst actors from profiting from AI.
Promoting the story of AI's explosive growth also inflates corporate shareholder valuations and generates a frenzy of venture capital investment in startups that aim to have computers replace humans in a host of job categories. Remember the driverless car narrative? Remember massive open online courses? We have barely finished those pendulum swings of hyperbole, massive venture funding and corporate announcements, and we are already falling into the same trap. Already, the AI industry is estimated to be worth more than $200 billion and is expected to become the next trillion-dollar industry by 2028. To meet the moment, we need to focus our energy on real solutions to the real threats here and now, and stop allowing corporations with a financial stake in the hype to set the agenda for the global AI conversation.
In the past, when technological advances have threatened societal disruption, our country has not shied away from regulating industries and creating conditions for more ethical practices. We regulate the transportation industry and food production, for example. We know how to make civil engineering safer, and we have the tools to do the same for AI. When seat belts were mandated in all cars, did the automakers roll up their assembly lines and close up shop? Absolutely not; outside of the pandemic years, major automakers have continued to generate handsome profits. And just as then, the companies now promoting the regulation of AI will be the same ones that keep garnering major profits as the industry changes and expands.
So we must take action, starting with regulation to prevent AI's real harms, rather than being besotted with the fiction of sentient AI and existential risk. At the same time, we need to establish a new set of norms around the technology. For example, scientists could be required to sign on to a code of ethics that carries real penalties for violations. Governments could attach strict rules to grant money; the Biden administration, for instance, recently required that universities and researchers who receive federal grants publish their findings on open-access platforms. And a new government agency could be dedicated solely to establishing guardrails and ethics while spurring job creation focused on AI rather than on eliminating the human component.
Regulating fast-moving, rapidly evolving AI systems is no easy task, but the European Union recently took an important step in that direction, providing a potential roadmap for policymakers around the world grappling with this problem. Its proposed AI Act would require risk assessments of AI applications used in public-facing areas such as managing critical infrastructure and determining who is eligible for government services or benefits. It would also ban controversial facial recognition technologies, AI tools that have been a key driver of racial bias and discrimination.
Instead of wringing our hands about the danger of AI becoming some sort of doomsday device threatening the extinction of humankind, we need to focus on taking active steps to make sure these tools support humanity. We need to take concrete actions to protect individual privacy rights and ensure AI systems are transparent, so we can better understand their outcomes and guard against the biases that can infect any human-trained system.
AI does not signal the end of the world as we know it. It is just the latest in a line of transformative technologies, one we can manage effectively if we choose to put protecting human rights ahead of hyperbole and shareholder value.
Rayid Ghani is a distinguished career professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. Illah Nourbakhsh is the Kavcic-Moura Professor of Robotics, the executive director of the Center for Shared Prosperity and the director of the CREATE Lab at The Robotics Institute at Carnegie Mellon University.