The views expressed by contributors are their own and not the view of The Hill

The dangers of tech company ethics

One of the few areas of agreement between Democrats and Republicans in Congress is the need to regulate the tech sector, and in particular large platforms like Facebook and Google that dominate the online ecosystem. 

Yet there are several obstacles to successful regulation, including resistance from tech companies themselves. This resistance takes many forms, not least of which is a tech company’s public embrace of ethical standards.

This month, two different senators have put forward plans to curb the power of large tech companies. On Feb. 10, Sen. Josh Hawley (R-Mo.) unveiled a new plan to overhaul the Federal Trade Commission to “get after Big Tech’s rampant abuses.” 

Then, on Feb. 13, Sen. Kirsten Gillibrand (D-N.Y.) proposed the Data Protection Act, including the creation of a new Data Protection Agency to "protect Americans' data."

Tech companies are fighting back in several ways. They are pouring money into lobbying to either defeat legislation outright or ensure that any bill that passes is too weak to affect their ongoing operations.

In 2018, the last year for which figures are available, Google spent $21.2 million lobbying the United States government, while Facebook spent $12.6 million.

This doesn’t count the millions of dollars in funding these companies provide to think tanks in Washington to help amplify their message. 

Even when tech companies say they support regulation, their efforts show otherwise. For instance, Microsoft President Brad Smith publicly called for government regulation of facial recognition, only for the company to then fight against strong facial recognition regulation in its home state of Washington. 

Not surprisingly, the last few years have seen a surge in technology companies publishing their ethical principles, especially as it relates to how they will develop and deploy powerful new technologies relying on artificial intelligence (AI). These principles are meant to show that tech companies are responsible corporate actors, able to look after the best interests of their users and society at large.

According to a recent report from the Berkman Klein Center for Internet and Society at Harvard, “In the past several years, seemingly every organization with a connection to technology policy has authored or endorsed a set of principles for AI.”

At first blush, these principles seem like a positive development, as companies publicly take responsibility for their actions. Google's AI Principles are a case in point. When Sundar Pichai, Google's CEO, announced these principles in 2018, he said: "We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come."

Further, it's seemingly hard to argue with the principles Google espouses: that AI should be socially beneficial, avoid creating unfair bias, take safety and privacy into account, be accountable and uphold high standards of scientific excellence. Google also promises to "work to limit potentially harmful or abusive applications" of AI. Other tech companies, such as IBM and Microsoft, have put forward similar principles.

And yet the underlying message beneath all these principles is: trust us. Trust us to decide what is socially beneficial, and what isn’t. Trust us to decide how safe is safe enough, which privacy safeguards are sufficient, and which aren’t. 

At no point do we, the users of these technologies, have a say in answering these questions. 

These ethical principles reinforce perhaps the most dangerous impulse of the tech sector – the belief that those who run tech companies are best placed to make critical decisions that affect all of us. 

Further, what happens when these principles collide with the underlying business model of companies like Facebook and Google? In her book "The Age of Surveillance Capitalism," Harvard professor Shoshana Zuboff describes how Facebook's and Google's reliance on advertising revenue drives these companies to collect ever more information about all of us, whether we use these platforms or not, and regardless of the impact on our privacy.

Google’s AI Principles state that the company won’t design or deploy AI in ways that “cause or are likely to cause overall harm,” including technologies that “gather or use information for surveillance violating internationally accepted norms.”

It's a nice sentiment, yet if these companies decide to change their minds, there's nothing we can do about it.

This is why strong bipartisan regulation of the tech sector is so important. It’s time that we, the people, have a say.

Michael Kleinman is the Director of Amnesty International’s Silicon Valley Initiative. 
