AI is going to change the world — but who will be leading that change?
Tech’s glitterati converged on Washington, D.C., earlier this month to participate in an AI congressional summit convened by Senate Majority Leader Chuck Schumer to begin considering an AI regulatory scheme. So, do you prefer Elon Musk, Mark Zuckerberg, Bill Gates or Sam Altman deciding your future?
Before you scoff, remember the alternative is a future authored by tech-challenged legislators anxious to build a new AI bureaucracy. Are we in more danger if Congress acts or if it doesn’t?
Today’s AI explosion — more accurately, a convergence of machine-trained large language models — is pointing us toward an AI inflection point that will raise life-altering choices on a scale never before seen. AI is increasingly being used to improve cities, fight wars, create safer means of transportation, grow food, enhance health care, redesign how legal outcomes are reached, produce financial results, revolutionize manufacturing, and educate our children. In the future, it may cure cancer just as easily as it may enable oppressive governments to ensnare their citizens in perpetual servitude. China has deployed 300 million facial recognition cameras which, along with mobile mass surveillance apps, collect and analyze vast amounts of citizen data to produce social scores. A low score can earn a trip to a reeducation camp. China and the many countries importing its technology increasingly see AI as a bridge to perpetuate political power.
Tech execs who have come to Washington tell us that what they are doing may eliminate jobs, cultures and even humans, and they ask to be regulated, almost as if regulation were the permission slip they need to keep doing what they are doing. Everyone knows that paying lip service to governance and oversight is an important opening move for anyone hoping to avoid onerous government regulation that might squash innovation.
Indeed, 33,000 “experts” signed an open letter calling for a six-month pause in the training of systems more powerful than GPT-4. Their stated intention is to allow time for the development and implementation of a set of shared safety protocols that can be rigorously audited and overseen by independent outside experts. That won’t happen. This may be a rare showing of technological rationality, but it may just as likely be a thinly veiled effort to slow down OpenAI’s ChatGPT until its competitors can catch up.
Whatever the case, there is universal concern over how AI can change our lives, particularly since there are no rules of engagement or oversight with regard to where AI goes, how it gets there, and who controls it. There are rules and licenses for everything in America, but not for AI. That should be concerning, since whoever dominates this technology will gain enormous power and influence over our lives. So as the AI magical mystery tour proceeds, several important principles should be followed.
AI is as dangerous as it is awesome
The irrational exuberance that tends to accompany new technological products and trick us into lowering our guard is readily apparent with AI. More than 100 million people used ChatGPT in its first 60 days, most likely spending less time learning about it than they would if they were considering buying a new microwave oven. AI is as intoxicating, mesmerizing and addictive as tech comes, and it will no doubt become the weapon of choice for online thieves, hostile nations and terrorists. It needs supervision.
The role of AI regulation
Legislators seem to equate the need to stay ahead of China with the need to regulate AI. Those are two very different things. Markets, not bureaucrats, should decide how AI evolves. The creation of an AI bureaucracy should be a regrettable byproduct of attaining AI dominance, not a goal.
Time is of the essence
Technology will wait for no one. We can’t wait for government agencies to take two years to conjure AI rules that will likely be inadequate and obsolete the day they are published. Even Vladimir Putin has figured out that whoever dominates AI will rule the world.
New technologies require new methods of regulation
Governments can’t hope to do it by themselves. AI regulation must reflect the knowledge and experience of a broad consensus of industry experts, computer engineers, technology leaders, security specialists, the defense industry, the financial and payments industries, academics, legislators, lawyers and users. It requires a new model that augments the “cops-n-robbers” style of regulation with a cooperative form of oversight based on joint public- and private-sector responsibility for data collection, analysis, oversight and decision-making.
AI is a complicated global challenge, and the United States cannot come in second in the AI sweepstakes. While it is still the world’s tech leader and has the largest economy and most powerful military, it must take the lead, as it did at Bretton Woods after World War II. If the EU wants to regulate an AI industry it doesn’t have, or if other parts of the globe want to play in this arena, they should do so on a cooperative basis, following a common playbook for democratic nations.
Twenty-five years from now, someone may ask where you were when artificial intelligence changed life as we know it. When that happens, the quality of our lives will have been determined by how much AI was used to cure debilitating diseases and how much to manufacture the tools of dictatorships. Our adversaries are sprinting toward AI dominance — if they get there first, we may not get the opportunity to regret it.
Thomas P. Vartanian is the author of “The Unhackable Internet” (2023). He is a former federal bank regulator and currently is executive director of the Financial Technology & Cybersecurity Center.