
Danger of social media business models

Although it seems apparent that the United States is more politically divided today than at any time since the Civil War, social scientists and political observers have been hard-pressed to provide a coherent explanation. Perhaps we are not looking closely enough at the role of the giant social media firms, Facebook and Twitter in particular, though they are hardly alone.

Although the Wall Street Journal and NPR have focused attention on the dangers that Facebook creates for children, there is reason to believe that the business models of all the social media firms — by increasing divisions in this country — are causing even more serious problems for U.S. society as a whole.

There’s an expression in the social media field that encapsulates the problem: “If you’re not paying for the product, you are the product.” This is the social media business model in a nutshell: the firms give away their services free of charge, yet they make vast profits. How?

The answer is that they use specialized algorithms embedded in their systems to discover each viewer’s particular interests. This enables them not only to personalize advertising to those interests, but also to present content that will hold the viewer’s attention for extended periods while the advertising is displayed. This is the key both to the firms’ profitability and to the divisions they create in society.

For example, if a Facebook user searches the web for sunglasses, that fact is captured by Facebook’s algorithms, and advertising for sunglasses appears between messages posted by friends and other information. But it is how Facebook and the others hold viewers’ attention on their sites that should be of greater concern.

The algorithms used by these firms can determine the political or other interests of each user by closely following which articles, subjects or photos capture their attention. This is easily measured by the time a user spends on one subject or idea rather than another. The objective of the algorithm is to find the subjects that hold the user’s interest the longest (a measure known as “dwell time”). This enables the platform to build a unique picture of every user’s interests.
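To make that mechanism concrete, here is a minimal sketch in Python of how dwell-based interest profiling might work. The event format, topic labels and scoring are illustrative assumptions made for the example, not any platform’s actual code.

```python
from collections import defaultdict

def build_interest_profile(view_events):
    """Accumulate per-topic dwell time from a user's viewing history.

    view_events: iterable of (topic, seconds_viewed) pairs -- a
    hypothetical stand-in for the scroll/click telemetry a platform logs.
    """
    dwell_by_topic = defaultdict(float)
    for topic, seconds in view_events:
        dwell_by_topic[topic] += seconds
    # Normalize so scores sum to 1.0: each topic's share of the user's
    # total attention becomes its "interest" score.
    total = sum(dwell_by_topic.values()) or 1.0
    return {topic: secs / total for topic, secs in dwell_by_topic.items()}

# A user who lingers on election posts far longer than on sports.
events = [("election", 540.0), ("sports", 30.0), ("election", 300.0)]
profile = build_interest_profile(events)
print(max(profile, key=profile.get))  # -> election
```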

Notice that the important question for the algorithms (human operators are not involved in this process) is what the user seems to be interested in, determined by how much time he or she spends reading about or looking at a particular subject. Whether the information the user receives is factual or truthful is neither the responsibility nor the purpose of the algorithm; it is designed only to hold the user’s attention by continuing to supply the kind of content the user has been found to favor.

A recent article in The Washington Post about “vaccine disinformation” noted that 3 million “vaccine-hesitant” people were apparently influenced by “just five users with more than 50 posts each.” How did the posts of five previously unknown people influence such a large group of Facebook users? The answer is that people who revealed themselves as “vaccine-hesitant” were identified by Facebook’s algorithms and fed the posts of the five users with strong views on the subject.

It’s easy to see how this could divide the American people.

If someone’s clicks show an interest in, say, criticism of former President Trump’s policies or of his view of the 2020 election, the algorithms dutifully follow up, furnishing more and more information along these lines each time the screen is refreshed. At the same time, a user who shows a pro-Trump attitude will be flooded with opportunities to read favorable information about the former president. In both cases, the information is selected not for its truthfulness but only for its relevance to the user’s demonstrated interests.
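A second sketch, again purely illustrative, shows why this becomes a loop. If the ranking score is nothing more than the match between a post’s topic and the user’s dwell profile, then every refresh reinforces whatever tilt the profile already has; truthfulness appears nowhere in the score. The topics and numbers below are invented for the example.

```python
def rank_feed(candidate_posts, profile):
    """Order posts by predicted dwell: the better a post's topic matches
    the interest profile, the higher it ranks. Accuracy is never scored."""
    return sorted(candidate_posts,
                  key=lambda post: profile.get(post[1], 0.0),
                  reverse=True)

# A user whose clicks already tilt toward one side of an issue.
profile = {"anti_trump": 0.6, "pro_trump": 0.3, "weather": 0.1}
posts = [(1, "pro_trump"), (2, "anti_trump"), (3, "weather")]

for refresh in range(3):  # simulate three feed refreshes
    top_id, top_topic = rank_feed(posts, profile)[0]
    profile[top_topic] += 0.1  # dwelling on the top post deepens the tilt

print(profile)  # the "anti_trump" share grows with every refresh
```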

Gradually, users are led down political rabbit holes, believing that they are seeing all the information on a particular subject, when they are actually getting a heavily reinforced diet of only one side of a widely discussed public issue.

In the end, they think they fully understand an issue that actually has many sides they have never seen. The result is a divided nation, each faction certain it knows the truth, when a platform’s algorithms have fed it only a specially selected diet. The success of this system will be demonstrated at tables around the nation this Thanksgiving.

How do we deal with this?

Repealing Section 230 of the Communications Decency Act will never be the answer; it does not get at the business models that are driving the problem. Legislation could, however, forbid the use of algorithms that identify and reinforce the interests of social media users. Let users make their own choices, or require a more balanced diet of information on political subjects. Users, of course, could still freely exchange their own political opinions, but they would then be exposed to the entire range of others’ views, not just those that agree with their own.

Under these circumstances, free of the algorithmic pressures, Americans would have a chance to engage with and persuade one another on many issues — as a vigorous democracy requires. 

Peter J. Wallison is a senior fellow emeritus at the American Enterprise Institute. His most recent book is “Judicial Fortitude: The Last Chance to Rein in the Administrative State.”
