New Congress, new tech, new approach
As the midterm election dust settles and a new Congress prepares for life on Capitol Hill by hiring staff and awaiting committee assignments, the economy, immigration and healthcare will understandably dominate the list of policy priorities. But Members must make time for national security matters, including emerging challenges like artificial intelligence (AI), before it is too late.
Despite ample warning about the national security implications of emerging technologies, Congress has done little to confront the issue. Broadly defined, AI is a system that can perform tasks without significant human oversight or learn from experience to improve its performance. AI development is underway domestically and internationally, in both the government and private sectors.
Experts continue to express concern over China’s billion-dollar effort to become a global leader in AI by 2030. Russia is also investing, with President Putin claiming that future AI leaders will rule the world.
Yet amid this growing competition, the United States has yet to produce a national artificial intelligence strategy or to adequately prepare for the future implications of current developments.
The 116th Congress has a new opportunity to engage critically on AI policy and work to ensure responsible, ethical use while still promoting domestic innovation. In approaching this issue, Members should consider the schism between public- and private-sector development, the need to create norms for artificial intelligence, and how to enable effective international coordination.
Significant private-sector investment in AI creates a unique situation for the government, which wants access to the cutting-edge technology. A McKinsey global report found that private companies like Google and Apple are outspending the government 10:1 on unclassified AI programs, no small feat. The government also struggles to compete with tech startups for a limited pool of qualified applicants, many of whom are uneasy about the defense implications of their work.
Google decided not to renew its government contract after employee protests against Project Maven, which staff claimed violated Google’s “Do No Harm” principle. Yet non-defense projects developed in Silicon Valley are often, according to the CIA’s Chief Technology Officer, “90 percent of the way to a useable military application,” creating a significant risk of misapplication regardless of intent. Congress must work with private industry to bridge the cultural gap between Silicon Valley and the Defense Department and ensure the technology is used as intended. Tech companies may find that working with the government actually furthers “Do No Harm” efforts by ensuring artificial intelligence is not misused.
As efforts to create effective public-private partnerships move slowly, AI continues to develop rapidly. Current capabilities are limited to “Narrow AI”: algorithms confined to specific problem sets. But as the technology evolves with growing datasets and new quantum computers, AI will move toward “General AI,” which can think and learn like a human.
As AI develops by learning from massive amounts of data, Congress must engage with industry to create effective standards and tests to ensure that data is unbiased. A recent test of Amazon’s facial recognition software shows why: compared against a database of criminal mugshots, it disproportionately misidentified Congressional Members of color as criminals.
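What such a test could look like is easy to sketch. The following Python snippet is a minimal illustration, using entirely synthetic data and a hypothetical score threshold rather than anything drawn from Amazon’s actual system. It compares false-match rates across demographic groups, the disparity at issue in the facial recognition test above.

# Hypothetical sketch of a bias audit for a face-matching system.
# All data below is synthetic; a real audit would use the system's
# actual match scores and a properly sampled benchmark dataset.
from collections import defaultdict

# Each record: (demographic_group, match_score, is_true_match).
# match_score is the system's confidence that a probe photo matches
# a mugshot; is_true_match is the ground truth.
results = [
    ("group_a", 0.91, False), ("group_a", 0.42, False),
    ("group_a", 0.95, True),  ("group_a", 0.30, False),
    ("group_b", 0.88, False), ("group_b", 0.86, False),
    ("group_b", 0.97, True),  ("group_b", 0.84, False),
]

THRESHOLD = 0.80  # scores at or above this count as a "match" (assumed value)

def false_match_rates(records, threshold):
    """Per-group false-match rate: the share of non-matching probes
    the system nonetheless flags as matches."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, score, is_true_match in records:
        if not is_true_match:  # only true non-matches can be false matches
            total[group] += 1
            if score >= threshold:
                flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}

rates = false_match_rates(results, THRESHOLD)
for group, rate in sorted(rates.items()):
    print(f"{group}: false-match rate {rate:.0%}")

# A standard might require these rates to fall within some margin of
# one another; a large gap (as in this synthetic example) would fail.
gap = max(rates.values()) - min(rates.values())
print(f"gap between groups: {gap:.0%}")

The specific metric matters less than the principle: disparities like this can be measured, standardized, and required to fall within agreed margins before a system is deployed.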
International efforts to manage this emerging technology have also fallen flat: years of meetings in multilateral forums have failed to reach consensus on definitions of “artificial intelligence” and “autonomous weapons,” let alone to create common standards. While countries debate definitions, AI continues to evolve, with both societal and military implications.
The most diverse Congress in American history has a unique opportunity to confront this complex issue, which affects all aspects of defense and society. One possible solution is an international AI center, created through a United Nations Security Council resolution or a treaty. With representatives from states, civil society and the tech industry, such a center could focus on developing norms, rules, regulations, transparency efforts and improved communication. Technical experts could coordinate efforts, test systems for bias and hacking vulnerabilities, and certify that technology meets agreed standards. Broad sharing of the latest developments in AI technology would give both governments and industry an incentive to participate.
Such an organization could also serve as a forum for developing formal international agreements. An early focus, for example, could be negotiating a ban on autonomous nuclear weapons. Not only would this be a commonsense proposal, it would also provide a foundation for dialogue between nuclear and non-nuclear states at a time when many strategic stability talks have stalled.
Whatever the format for an improved AI dialogue, better cooperation and coordination are real and attainable possibilities. The 116th Congress will need to find a way to engage and support such efforts, despite its historical lack of technological inclination. To begin, it should capitalize on the industry’s momentum to foster innovative and responsible use of AI, and develop a national strategy with an eye toward international engagement. There is little time to lose: the window of opportunity to both control and support this technology is narrowing. We need to decide on policy and plans before the computers do it for us.
Erin Connolly is a Program Assistant at the Center for Arms Control and Non-Proliferation, focusing on artificial intelligence and youth engagement in national security issues.