The New World of Censorship
The internet age has been a boon to free speech in many ways — but it also brings new challenges, including the chilling effect of government surveillance on free expression and the mounting pressure from the European Union on companies like Google and Twitter to censor and monitor speech worldwide. How should we counter these attempts to censor “extremist speech”? DANIELLE KEATS CITRON of the University of Maryland Law School joined Cato’s MATTHEW FEENEY and FLEMMING ROSE to discuss these questions at Cato’s conference The Future of the First Amendment in September.
MATTHEW FEENEY: It’s impossible to talk about surveillance post-2013 without talking about Edward Snowden, the former National Security Agency contractor who released massive amounts of information to journalists concerning the activities of America’s intelligence community. In June 2013 these revelations began to be published, and among the most controversial was the PRISM program, an internet communications surveillance program. A few months afterward, PEN America surveyed more than 520 American writers and found that almost a quarter of them deliberately avoided certain topics in phone or email conversations. Pew also did a post-Snowden survey and found that 18 percent of Americans who were aware of the surveillance programs had changed the way they used email.
Alex Marthews of Digital Fourth and Catherine Tucker at MIT tried to isolate a causal relationship by analyzing Google search terms over 2013. They collected 245 search terms — first from a list of terms that the Department of Homeland Security keeps an eye out for on social media, such as “assassination,” “bacteria,” and “burn.” They also assembled a neutral list of terms and ran a crowd-sourcing exercise to isolate potentially embarrassing terms that have nothing to do with national security. Included on this list are “Honey Boo Boo,” “My Little Pony,” “Nickelback,” “nose job,” “sexual addiction,” “suicide,” “Viagra,” “weed,” and “World of Warcraft.” They found that after Snowden, the Google Trends search index fell for terms deemed troubling from both a government and a personal perspective. The Snowden revelations seem to have prompted a chilling effect not just on search terms related to national security, but also on embarrassing terms that have nothing to do with the defense of the country.
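For readers who want to see the shape of that analysis, here is a minimal sketch in Python. The term, dates, and index values are entirely hypothetical; it only illustrates the kind of before-and-after comparison of Google Trends indices that a study like this rests on, not the authors’ actual method or data.

```python
import pandas as pd

# Hypothetical weekly Google Trends indices (0-100) for one tracked term.
# Values are invented for illustration; they are not data from the study.
weeks = pd.date_range("2013-01-06", periods=52, freq="W")
search_index = pd.Series([62] * 22 + [49] * 30, index=weeks)

# The first PRISM stories were published in early June 2013.
snowden = pd.Timestamp("2013-06-06")

before = search_index[search_index.index < snowden].mean()
after = search_index[search_index.index >= snowden].mean()

print(f"mean index before: {before:.1f}")
print(f"mean index after:  {after:.1f}")
print(f"change:            {after - before:+.1f}")
```

A real study would of course compare many terms against a neutral control list and test whether the post-June drop is statistically significant, rather than eyeballing two averages.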
Jonathan Penney did a similar study by looking at Department of Homeland Security lists and their associated Wikipedia articles. He said, “The large, statistically significant, and immediate drop in total views for the Wikipedia articles after June 2013 implies a clear and immediate chilling effect.” There are limitations to these kinds of studies, but nonetheless, they back up a widespread feeling that the Snowden revelations did prompt some people to change their behavior.
Off the internet, we have other concerning surveillance activities. There’s been a steady uptick in the number of electronic device searches at the border. Customs and Border Protection reserves the authority to search electronic devices belonging to travelers coming into the United States, including U.S. citizens, without probable cause. The American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) are challenging this authority on both Fourth Amendment and First Amendment grounds, writing in their complaint, “Plaintiffs, and the many other travelers who cross the United States border every year with electronic devices, will be chilled from exercising their First Amendment rights of free speech and association.” Everyone knows that your cell phone and your laptop contain troves of information about your religious or political beliefs, what you’ve been reading, and so on.
In the wake of the Ferguson, Missouri, protests in 2014, body cameras became a staple in police misconduct discussions. But absent good policies, body cameras are tools for surveillance. Police officers should not be able to scour body camera footage in order to conduct surveillance, and body camera footage that does not show incidents such as arrest, use of force, or incidents under investigation should be deleted relatively quickly.
Last year it was revealed that Baltimore police had been using persistent surveillance airplanes for quite a while. What I think is troubling, aside from the surveillance itself, is that very few people knew this was going on. The governor of Maryland did not know. Maryland’s congressional delegation did not know. The system allows analysts to have what the developer describes as “Google Earth with TiVo.” The Baltimore police can pick up the phone and say, “Hey, we’ve had a murder at 123 Main Street that happened at around 2:00 p.m.” The analysts can then look at 123 Main Street and track the murder suspect back to wherever he’s hiding out. But if they can track murderers, they can track protesters, and if they can track protesters, they can track people attending religious meetings.
Dayton, Ohio, is another city where this technology has been deployed. In 2014 there was an interview with Dayton Police Chief Richard Biehl, and I think his comments should disturb all of us. Biehl said, “I want them [the public] to be worried that we’re watching. I want them to be worried that they never know when we’re overhead.” A cheaper kind of aerial surveillance equipment that is becoming more and more common is the unmanned aerial vehicle, or drone. This is a trend we should keep a careful eye on.
Finally, I want to discuss facial recognition. About one in two American adults is already in a law enforcement facial recognition database, thanks to the fact that law enforcement has access to Department of Motor Vehicles facial images. This can help law enforcement identify missing persons and wanted suspects — but it can also be used to identify innocent people from a distance, and that undoubtedly could have a very significant chilling effect on First Amendment protected activity such as protests. I would hope that only violent criminals, wanted suspects, or missing persons would have their data included in these kinds of databases. You never know who’s going to be the target of government surveillance — at the moment it is extremist Muslims, but a fleeting glance at American history reveals that communists, civil rights leaders, the ACLU, folksingers, and Native Americans have all been targets of government surveillance. No one knows who the next target will be in 5 years, let alone 20.
DANIELLE CITRON: Tech companies like Facebook, Twitter, Microsoft, and YouTube have long understood themselves as protectors of free speech. I’ve worked with Twitter and Facebook on how they respond to stalking, harassment, and threats, and in all of our conversations, the First Amendment and free speech values have always been part of the discussion. When Jennifer Lawrence’s nude photos were leaked online and spread all over the internet, Twitter, among other companies, faced a lot of pressure to address non-consensual pornography, and after long discussions with advocates it arrived at a thoughtful proposal for how to deal with it.
But in the shadow of threats from the European Commission and European regulators, we’re seeing companies respond to hateful conduct and extremist speech in ways that are not thoughtful, changing their speech policies in ways that are really troubling.
After the terrorist attacks in Paris in late 2015 and in Brussels in early 2016, the European Commission and EU regulators made clear that they thought the tech companies were responsible — that they harbored online radicals. The clear message from the Commission, as well as from member states’ lawmakers, was, “If you don’t ensure that hateful conduct and extremist speech are removed immediately, you are going to face criminal and civil penalties of an extraordinary order.” Europe doesn’t have any analog to the First Amendment, so these weren’t idle threats, and tech companies responded in two incredibly important ways. In May 2016, Twitter, Facebook, Microsoft, and YouTube announced an agreement with the European Commission called the Code of Conduct on Countering Illegal Hate Speech Online.
They agreed that within 24 hours of speech being reported as “hateful conduct,” defined as speech that incites hatred or violence against protected groups, it would be removed under the companies’ terms of service. Six months later the same four companies announced that they were adopting a shared industry database of hashes of violent extremist content — digital fingerprints that permit automatic blocking — to which they would all contribute. The hashing technology would allow the immediate recognition and removal of extremist material.
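To make the mechanism concrete, here is a minimal sketch, in Python, of how matching uploads against a shared hash database can work. This is an illustration under simplifying assumptions, not a description of the companies’ actual systems: it uses an exact cryptographic hash, whereas the industry database is understood to rely on perceptual hashes so that re-encoded or lightly edited copies still match, and the hash value shown is invented.

```python
import hashlib

# Hypothetical shared industry database of hashes of content that has
# already been flagged. The value below is invented for illustration.
SHARED_HASH_DATABASE = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def content_hash(data: bytes) -> str:
    """Return a hex digest that identifies this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block an upload whose hash matches the shared database."""
    return content_hash(upload) in SHARED_HASH_DATABASE
```

The policy concern in the discussion follows directly from this structure: once a hash is in the shared set, every participating platform blocks the same content everywhere, with no per-country or per-context review built into the match itself.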
You might say, OK, so what’s the problem with this? If we remove, let’s say, a beheading video from ISIL, we might prevent violence, right? But at the same time, there is a risk of creeping censorship that I think is qualitatively different from the kinds of pressure that EU countries have put on these companies before.
It’s qualitatively different in two respects. The EU has an internet referral unit that will, on an ad hoc basis, report extremist videos and hateful content that the EU wants taken down. In the past, companies responded by geographically blocking the content. But with the adoption of the hate speech code of conduct, all content reported by anyone, including government actors and groups acting on their behalf, will be removed under the companies’ terms of service. And terms of service are the same wherever the platform is accessed. So if a government asks Twitter to remove speech as violative of the EU’s hateful conduct code, it will be removed across the globe, wherever the platform is accessed. Both the industry database and the hate speech code of conduct are operationalized on a global basis.
The second concern is that terms like “violent extremism” and “hateful conduct” are vague and ill-defined. Not only could they be subject to censorship creep, it’s happening already. When EU Justice Commissioner Věra Jourová issued a report criticizing the four companies for failing to remove enough hate speech, she also faulted them for not removing online extremism, radical polarization, and “fake news.” One can imagine, of course, that “fake news,” to a government actor, is political dissent, criticism, and other newsworthy content. The creep is not hypothetical. It’s happening.
So, what do we do about it? Companies need to be clear, externally and internally, about what they mean by hateful conduct and violent extremism, defining those terms with specificity and real boundaries. And when governments, or people or groups acting on governments’ behalf, either report terms of service violations for hateful conduct or attempt to contribute data to the industry database, it has to be through a dedicated reporting channel, and the moderators overseeing and implementing the terms of service need to view those requests with a skeptical eye. After being nudged by EFF and other advocacy groups, the big platforms now issue transparency reports on government requests. Twitter has reported the countries that have made requests to take down extremist terrorist material, and it should do that not only for terrorism but for hateful conduct.
And last, many media companies have ombudsmen, and part of their task is to think through what’s newsworthy. Increasingly, these platforms do in fact serve as media companies — when there’s breaking news, I go to Twitter. They should have ombudsmen to think hard about how to understand the news, because what seems like terrorist propaganda today may prove newsworthy tomorrow. These aren’t easy choices. But all of these platforms need to recognize that they sit in the midst of public discourse, and newsworthy information is going to be taken down. Europe is not going to restrain itself anytime soon.
FLEMMING ROSE: I think “extremist speech” is an unfortunate term when it is being used in debates about freedom of speech and its limits, because it confuses the boundaries between protected and unprotected speech, between speech that is within the law and speech that is outside the law. Extremist speech means speech that is far removed from the ordinary — but the definition of what is ordinary is always a subjective and political matter. The abolitionist movement in the United States in the 18th and 19th centuries engaged in “extremist speech,” according to the political and social norms of the time. In fact, any speech that challenges the status quo may be denounced as extremist.
This becomes very clear if we look at countries less free than the United States. In Russia, for example, speech classified as extremist and thus prosecuted includes criticism of a governor’s overspending, publishing a poem in support of Ukraine, the literature of the Jehovah’s Witnesses, distribution of Raphael Lemkin’s essays on the concept of genocide, and peaceful protests against court rulings, just to name a few. Words like extremism and moderation contain little meaning in and of themselves. Everything depends on the context. For example, do you prefer a moderate defender of fascism or an extreme supporter of liberal democracy?
A better way of framing the debate is to make a distinction between dangerous and nondangerous speech — speech that represents a clear and present danger of violence or a threat of violence doesn’t deserve First Amendment protection, while a lot of so-called extremist speech does. In doing so we avoid violating a fundamental First Amendment principle — namely, no viewpoint discrimination. It means that white supremacists and Nazis can engage in what most of us would define as extremist speech as long as they do not engage in speech that conveys a true threat.
In most countries, maybe less so in the U.S., opinionmakers take it for granted that the link between evil words and evil deeds is pretty straightforward — that evil words will lead to evil deeds, and in order to prevent this from happening we need less freedom of expression. Therefore, I was rather surprised when some years ago I started looking into the empirical foundation for these claims. There is very little data on the issue — in fact, it’s a hugely underdeveloped research field. But I learned, to my surprise, that Weimar Germany in the ’20s and beginning of the ’30s had hate-speech laws intended, among other things, to protect groups against religious insult, Jews among them. It surprised me because the conventional wisdom, at least in Europe, is that Weimar Germany had too much freedom of expression and that more and tougher laws against hate speech might have prevented the Nazis from coming to power. Most hate-speech laws in Europe today are being justified with a reference to this narrative.
So, what does the relationship between speech and violence look like? First, there is no clear link between hate speech and violence. Hatred of another group isn’t necessarily what drives a person to kill. And second, the widespread understanding that we need to criminalize more speech if we want to prevent religious and ethnic violence isn’t supported by the available data.
The Pew index on social hostilities involving religion tracks attacks on religious institutions and religiously motivated terrorism across the world. So far, we have data covering the years from 2007 to 2014, and they indicate that religious violence and conflict don’t increase with freedom of speech. Quite the contrary. There is less religious violence and conflict in consolidated democracies with a robust protection of freedom of the press than in hardcore dictatorships and authoritarian regimes. It is true, though, that there is less religious violence and strife in the most oppressive countries than in softer authoritarian regimes — the Soviet Union vis-à-vis today’s Russia, for instance. I have to add, however, that the most oppressive regimes use a lot of violence to silence individuals and ethnic and religious groups, and because these regimes are closed to the outside world, there may be more violence than we know about.
I think the lesson is that freedom of speech and tolerance don’t come naturally to human beings. Therefore, freedom of speech will always be an endangered species if we do not cultivate it. It takes a long time to create a culture of freedom and tolerance in which differences of opinion are managed without resorting to violence, threats, intimidation, or the criminalization of viewpoints. But if we, in the long run, protect fundamental liberties like free speech and are able to manage our differences in a peaceful way, then the risk of violence and mass atrocities will fade. That’s one of the reasons it’s important not to compromise or undermine civil liberties when a country faces security threats and threats of violence.