
How do we prevent viral live streaming of New Zealand-style violence?

While the world still mourns the loss of 50 lives in the deadly mass shooting in New Zealand, many questions are being asked about the role of big tech, social media and live streaming. But the live broadcasting of murder isn’t a recent phenomenon.

Television’s power to show the death of another human being in real time was first demonstrated on November 24, 1963. That’s the day Jack Ruby shot and killed Lee Harvey Oswald, then the most infamous man in the world, in front of live television cameras while the nation was still reeling from the assassination of President John F. Kennedy.

In light of recent events, two distinct conversations worth noting are taking place. The first is the issue of gun control in New Zealand. The second is the responsibility of big tech and social media companies. While the first is of obvious local importance for New Zealanders, it is the second that has global implications.

How do you address a global technology that transcends borders and language? Even more challenging, how do you address a societal ethos that willingly shares video showing the mass murder of 50 people?

The law has famously lagged behind the rapid evolution of technology. For decades, the only way to wiretap a telephone involved copper lines. The law and the courts never imagined the Internet, cellular phones, Voice over IP (VoIP), WhatsApp, Facebook, Twitter, Slack, email or any of a hundred other methods of modern collaboration and communication.

So, if you’re looking to the law to solve the problems of social media laid bare by New Zealand, you had better think again. There’s no way lawmakers could fathom all the intricacies of modern social media and technology and craft legislation that is actually effective. Exhibit A came from the Senate hearing with Facebook back in April of 2018.

Recently retired Sen. Orrin Hatch (R-Utah) was clearly flummoxed by Facebook’s business model: “How do you sustain a business model in which users don’t pay for your service?” Kudos to Zuckerberg for not snorting, although it did take three to four seconds before he replied, “Senator, we run ads.”

Exhibit B was provided by the other side of the aisle: Sen. Brian Schatz (D-Hawaii) thought that WhatsApp is used to email friends about the smash hit “Black Panther.” Never mind that WhatsApp is an end-to-end encrypted messaging platform, not email, as Zuckerberg reminded the senator. If content is encrypted end-to-end, no one — not even the platform’s own algorithms — can see inside.
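That point is worth making concrete. The sketch below uses the open-source PyNaCl library purely for illustration (WhatsApp actually builds on the far more elaborate Signal protocol); the names and message are invented. Because the relaying server never holds a private key, all it can ever see is ciphertext:

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(
    b"Did you see Black Panther?"
)

# A relaying server sees only this opaque blob; no server-side algorithm
# can read, scan or categorize it.

# Only Bob, holding his own private key, can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"Did you see Black Panther?"
```

Scanning such content would require moving the analysis onto the user’s device or weakening the encryption itself, which is exactly why end-to-end platforms frustrate both regulators and moderation algorithms.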

On the other hand, self-regulation by the big tech giants hasn’t exactly accomplished the goals of society either. The only body that seems to have crafted an effective approach is the European Union, whose General Data Protection Regulation (GDPR) took effect in May 2018. Previously small and insignificant agencies now have the power to fine offending companies up to 4 percent of their global annual revenue. But that’s still after the fact. As long as big companies can bear the burden of big fines, little is being done to actually prevent the acts in the first place.
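To put that ceiling in perspective with a hypothetical: a platform with $50 billion in annual global revenue would face a maximum fine of 4 percent of $50 billion, or $2 billion. Painful, but survivable for a company of that size.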

The siren song from big tech and social media companies is that Artificial Intelligence (AI) is the silver bullet that will slay the evil of online content that violates the norms of a civil society.

In June of 2016, a self-proclaimed ISIS jihadist killed a French police captain and his partner, then streamed the aftermath on Facebook Live as he contemplated what to do with the three-year-old boy who was the only survivor of the attack. Facebook issued a statement: “Terrorists and acts of terrorism have no place on Facebook. Whenever terrorist content is reported to us, we remove it as quickly as possible. We treat takedown requests by law enforcement with the highest urgency.”

According to Forbes, Facebook is also “testing reviewing broadcasts that are trending or viral before they are reported and is working on artificial intelligence tools that can interpret and categorize live videos in real time. However, the company isn’t yet using that tool at scale.” So, the question — more than two-and-a-half years later — is why isn’t this ‘tool’ being used at scale? And if it is, why did it fail?

The promise of AI to cure cancer and to help identify and stop crime isn’t even remotely near realization. The touting of automated tools, machine learning and AI is a stalling tactic to stave off looming government regulation. To be fair, Facebook isn’t the only company struggling with this issue. YouTube, Twitter, Periscope, Instagram and numerous other platforms are all part of a larger big tech and social media ecosystem that has revenue as its main concern.

I propose that there are a couple of common-sense ways to mitigate — at least in part — the risk of broadcasting violent crime or terrorism live from a social media platform. One is through the use of verification. There are numerous platforms that verify that celebrities, authority figures, brands or public figures are who they say they are. For example, the Twitter account for The Hill has the ‘verified’ checkmark.

If you want to broadcast live, your first hurdle would be to have a verified account. Otherwise, just like network broadcasting, there is a delay. This would give the still-evolving algorithms a chance to detect a problematic live stream and put a human in the loop for review. You’d be hard-pressed today to find a ‘live’ event on broadcast television that doesn’t have a delay built into it, and yet people survive.
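Here is a minimal sketch of how such a gate might work. Everything in it is an assumption for illustration: the 30-second delay, the risk threshold, the stub classifier and the review hand-off are invented, not any platform’s actual pipeline.

```python
import collections

DELAY_FRAMES = 900      # assumed ~30 seconds of delay at 30 frames/sec
RISK_THRESHOLD = 0.8    # assumed score above which a human must review

def risk_score(frame: bytes) -> float:
    """Placeholder for a real-time content classifier. A production
    system would run a trained model here; this stub treats every
    frame as benign."""
    return 0.0

def hold_for_review(frame: bytes) -> None:
    """Hypothetical hand-off to a human moderation queue."""
    print("frame held for human review")

def broadcast(frames, account_verified: bool):
    """Yield frames to viewers. Verified accounts go out live;
    everyone else passes through a delay buffer that gives the
    classifier, and a human reviewer if flagged, time to act."""
    if account_verified:
        yield from frames                   # verified: live, no delay
        return

    buffer = collections.deque()
    for frame in frames:
        buffer.append(frame)
        if len(buffer) > DELAY_FRAMES:      # delay window is full
            oldest = buffer.popleft()
            if risk_score(oldest) >= RISK_THRESHOLD:
                hold_for_review(oldest)     # withheld from viewers
            else:
                yield oldest
    while buffer:                           # drain when the stream ends
        oldest = buffer.popleft()
        if risk_score(oldest) >= RISK_THRESHOLD:
            hold_for_review(oldest)
        else:
            yield oldest
```

The delay buffer is the whole trick: it converts an instantaneous broadcast into one where detection, and human judgment, can run ahead of the audience, exactly the way a network television delay works.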

This won’t stop everything; nothing ever does. But it can increase the odds that the next New Zealand-style attack won’t get an audience of millions. We’ve come a long way since 1963. We still have a long way to go. The difference between then and now is in the means of production. The only way to broadcast live in 1963 was to be a television station with very bulky and expensive equipment. Today, a smartphone and a data plan are all you need. Unfortunately, an audience all too willing to share horrific content seems to be built in.

Morgan Wright is an expert on cybersecurity strategy, cyberterrorism, identity theft and privacy. He previously worked as a senior advisor in the U.S. State Department Antiterrorism Assistance Program and as senior law enforcement advisor for the 2012 Republican National Convention. Follow him on Twitter @morganwright_us.
