
Towards a More Meaningful Transparency, Research Paper Example


Taking A Close Look at The Latest Twitter Transparency Report

Introduction

Global tech companies, especially the ones that in their nascent years vowed not to be sinister or nefarious in their endeavors, find it extremely arduous to select and operationalize the right compliance model when dealing with foreign autocrats. It is more an art than a science: on the one hand, submissive compliance with local censorship or surveillance may trigger a backlash among shareholders and policymakers back home. On the other hand, trying to be a standard bearer for human rights in an autocratic environment by pushing back on excessive removal or user data requests from local authorities may result in the blocking of the entire platform there; Telegram, LinkedIn and Zello have recently gone through exactly this experience in Russia.

Transparency reports, which tech giants publish once or twice a year, represent one instrument in a broader compliance toolkit. They can serve one of two goals, though not both at once: either to shine a spotlight on government censorship, or to demonstrate to local authorities that the company is serious about compliance with their demands. William Echikson, a former senior policy manager at Google, recently argued that in the case of his former employer these reports have begun to serve the latter: “to convince authorities in Europe and elsewhere that the internet giant is serious about cracking down on illegal content. The more takedowns it can show, the better.”

Since 2010, as telecommunications and internet companies have made concerted efforts to build user trust and set themselves apart in burgeoning, competitive industries, the transparency report has emerged as an increasingly common tool for doing so. Transparency reports publish aggregate data on government requests for user information, intellectual property-related takedowns, and government mandates to remove content, offering firms a public-facing opportunity to display their commitment to safeguarding user rights. As such, these reports have been viewed as an extension of extant corporate social responsibility (CSR) practices and, like other CSR practices, constitute a ripe opportunity for firms to put their corporate values on display and into practice. Also like other CSR practices, transparency reports are mandated by neither industry nor government; their emergence has been voluntary and organic.

In its latest transparency report, covering the second half of 2017, Twitter reported that its highest compliance rate with government content removal requests (51%) was in Russia – significantly higher than in any democratic country (e.g., Germany (9%), France (3%) or the US (0%)).

Table 1. Russia removal requests, July – December 2017

  • Removal requests (court orders): 0
  • Removal requests (government agency, police, other): 1,292
  • Percentage where some content withheld: 51%
  • Accounts reported: 1,306
  • Accounts withheld: 23
  • Tweets withheld: 630
  • Accounts (TOS): 224

Source: Russia – Twitter Transparency Report ( https://transparency.twitter.com/en/countries/ru.html)

The company received 1,292 requests from the Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roskomnadzor), a Russian government Internet watchdog. As a result, it withheld 23 accounts and 630 tweets, and none of these removal requests came from courts. According to the report, all the requests were issued “under Law 149-FZ, which covers reports of suicide promotion, extremism, gambling, illegal drugs, and child pornography.” Other than this reference to Russia’s infamous censorship law and an example of one tweet related to a Ukrainian far right organization, the company provided no other details as to the types of content it withheld.

This high level of compliance in an autocratic environment like Russia clearly stands out and merits a deeper analysis. According to the latest 2018 Corporate Accountability Index, the world’s top tech companies still do not disclose enough data about government requests for content restriction, although Twitter ranks higher than many other companies. In this article, we attempt to partially remedy that gap by identifying the types of content Twitter actually restricted in Russia in response to government demands and which content was blocked locally as opposed to removed globally. We conclude by suggesting several steps Twitter’s management could take to make sure that its transparency reports still serve the purpose of holding governments accountable, “especially on behalf of those who may not have a chance to do so themselves.”

Literature Review

Online social media networks such as Twitter and Facebook have enabled people not only to interact with other users but also to read and share news and to engage in discussion of politics and significant events. Furthermore, the proliferation of smartphones has expanded the use of such platforms, allowing citizens to communicate without geographical or temporal limitations. Politicians and governments have acknowledged that social media has emerged as a rich medium of communication (Romero, Meeder, & Kleinberg, 2011). Indeed, social media has become an essential tool in political campaigns, first evident in the 2008 presidential election in the United States, when Barack Obama’s campaign effectively utilized Twitter to post updates related to his campaign and grant his followers opportunities to volunteer (Baumgartner et al., 2010). Beyond helping politicians galvanize a greater following, social media allows government institutions to engage in candid communication with their citizens, thereby enhancing the transparency and openness of their organizations (Lorenzi et al., 2014; Bertot, Jaeger, & Grimes, 2010). As Heverin & Zach (2010) reported, a panoply of studies, from police departments to civic services, has demonstrated that public engagement and information sharing via Twitter can enhance transparency and increase citizens’ confidence in their local institutions and the state.

Internet Content Blocking

The utilization of internet blocking by governments to prevent access to illegal content is a burgeoning worldwide trend. Policymakers want to block access to some content for a myriad of reasons, including intellectual property, national security, online gambling, and child protection. Unfortunately, beyond the issue of child pornography, there is no international consensus on what constitutes appropriate content from a public policy vantage point. The internet remains an integral component of policy discussions and democratic processes, even as government agents, regulators, and legislators across the globe seek to block some of its content. While the motivations and reasons for internet blocking are beyond the scope of this paper, transparency about what content is blocked, and about the underlying policies and objectives, is nonetheless important. National authorities should ensure that affected users are granted the opportunity to raise concerns regarding adverse effects on their interests, rights, and opportunities.

Emergence and adoption of transparency reporting

Internet companies first began publishing transparency reports in 2010, when Google released its initial report covering government requests to remove content and to hand over user data, both in the United States and across the globe. China played an integral role in that decision, just as it figured prominently in Google’s first-mover offerings of two-factor authentication and transit encryption by default. Google articulated a desire, shared by a panoply of other firms and privacy proponents, to reform the United States laws that govern when law enforcement has the authority to demand information from online service providers.

Over the course of the next three years, major companies including Microsoft and LinkedIn slowly emulated the example set by Google, as did Twitter, where former Google employees sought to build on the work of their former employer. However, it was not until Edward Snowden’s revelations about NSA surveillance in mid-2013 that the practice became standard. Indeed, Snowden’s revelations initiated a crisis in consumer confidence, particularly around how American companies handled private data. Within a year of the revelations, the preponderance of major online service providers, including cable and phone companies, had published comprehensive reports about government demands for data. This explosion in reporting coincided with a major political and legal fight in which privacy proponents and the internet industry collaborated to demand that firms retain the ability to publish rudimentary numerical information about national security requests received from the U.S. government, a campaign that underpinned the transparency reforms that manifested in the USA Freedom Act of 2015.

As the Snowden revelations accelerated its adoption, transparency reporting transformed from something just one firm did in 2010, and only a few companies did prior to 2013, into a standard practice engaged in by over 50 U.S. telecommunications and internet companies and a burgeoning number of international firms. As transparency reporting became standard within the internet industry, other features became standard as well, including a section on government requests for customer or user data – an unsurprising feature given the close link between the Snowden revelations and the emergence of transparency reporting. The preponderance of internet companies also publish information about takedowns under the Digital Millennium Copyright Act. Another salient feature of transparency reports is information about government mandates to remove or take down content for a variety of purposes, although firms have not yet published in-depth information about content taken down under their own terms of service. The demand for such reporting nonetheless continues to grow.

Key Factors Underpinning the Adoption of Transparency Reporting

An amalgam of factors—such as human rights threats emanating out of China, calls for transparency from privacy and human rights proponents, and the impulse to reform privacy law not only in the United States but in countries across the globe—facilitated the adoption of transparency reporting. However, the most potent factor in its diffuse adoption, as mentioned previously, was the revelations made by Edward Snowden. While a small handful of firms had already engaged in transparency reporting, after the summer of 2013 there was an exponential increase in the number of companies publishing transparency reports on a regular basis. Each company that adopted the practice put more pressure on its competitors, catalyzing a domino effect that ended with nearly every company in the U.S. internet industry adopting it.

Civil society organizations wielded considerable influence over the adoption of transparency reporting, both prior to and following the Snowden revelations. One of the most influential efforts was the “Who Has Your Back?” project spearheaded by the Electronic Frontier Foundation: a scorecard that, starting in 2011, ranked internet companies on their free speech and privacy policies, including whether or not they published a transparency report (EFF, 2014). That endeavor, along with newer efforts such as the international Ranking Digital Rights Corporate Accountability Index, further amplified competition among firms attempting to rebuild consumer trust across the globe following the Snowden revelations. Changes made to the law after the revelations granted firms new latitude to publish information about national security requests that they had hitherto been prohibited from sharing, thereby encouraging further reports. Global companies are under growing pressure to issue these reports, as major companies like Vodafone and Deutsche Telekom have adopted transparency reporting and global multi-stakeholder bodies such as the Freedom Online Coalition continue to push for the practice to be adopted on a macro scale.

What is the Future of Transparency Reporting?

Over 50 American telecommunications and internet companies have adopted the practice of transparency reporting in the few short years since Google released its initial report. With every iteration, several firms have added new content, features, and data points. While it is difficult to predict what future generations of transparency reports will present, a few domains are likely to be enhanced, including greater standardization of reports and coverage of terms of service enforcement. Discernibly absent from essentially every published transparency report was data about the enforcement of terms of service. It was not until Etsy published its transparency report in July 2015 that any company reported information on the enforcement of its internal policies. Only Twitter has since added terms of service enforcement as a section in its transparency report; at the outset of 2016, it began including data about particular types of content removed for violating the platform’s policies. Particularly given concerns regarding extremist or terrorist-related content on the internet, transparency around a company’s internal policies that affect the freedom of expression of its users has become increasingly important, and there has been burgeoning pressure from outside the industry – and some desire within it – to offer such transparency as soon as possible.

While transparency reporting has flourished in recent years, and firms have experimented with new and innovative approaches, the practice has suffered from a lack of consistency that blunts the utility of transparency reports. Because of different reporting and accounting practices, and a lack of clarity regarding how firms define or count particular terms, it is hard to combine or compare data across companies. As such, there is a growing call for companies to standardize the most salient features of transparency reporting. One representative effort is the Transparency Reporting Toolkit, designed by the Berkman Klein Center for Internet & Society and the Open Technology Institute, which offers a model standardized reporting format predicated on a comprehensive survey of best practices in the field.

Twitter Transparency and Government Restriction of Free Speech Online

Any set of policy recommendations always calls for greater transparency, whether it concerns how firms employ user data, reform of government surveillance, or technical standards that facilitate the detection of censorship (Llanso, 2017). Transparency is not a mere end in itself; it is an integral mechanism for comprehending the forces that shape people’s online experiences. Twitter’s most recent transparency report is a prime example of just how much information can be gleaned from such reports. Indeed, Twitter published new information regarding the complex interactions that popular social media companies can have with governments – especially autocratic ones – that seek to restrict or remove content online.

Twitter’s transparency report includes four distinct types of interaction with government processes, each of which may result in the limitation or removal of content from user accounts. The first vector for government demands is an official legal process treated as a matter of law. The paradigm case of a government mandate to restrict online content involves law enforcement obtaining a court order asserting that certain activity or content is illegal and serving the order on a company—in this case, Twitter. The company then considers the validity of the order to decide whether it will limit access to the content in a particular country. In Twitter’s report, the overwhelming majority of the court orders received came from Turkey, although content was removed or withheld in response to only 19% of them. Official legal process may also incorporate requests issued not by a court but through some other formal procedure. The report shows that Twitter receives over five times as many administrative orders as court orders; such a comparison is quite useful for users and advocates because it helps pinpoint the mechanisms that governments most commonly employ to restrict speech.

A second type of interaction is an official legal process handled under the Terms of Service. In 2016, Twitter began publishing the number of official legal requests it actioned as violations of its Terms of Service rather than as a matter of law. This data is reported in the Accounts (TOS) column and proffers trenchant insight into the way official legal orders may cause content to be removed. For example, as will be discussed in greater depth in this article, Twitter disclosed that hundreds of accounts were affected when it reviewed official legal requests and determined that the content published on them violated the company’s terms of service. Such information sheds light on how the Russian government succeeded in removing speech from the internet even where it lacked jurisdiction or the speech did not actually violate the law.

The third category of interaction is a report from a non-governmental organization treated as a matter of law. Listed as EU Trusted reporters, this new category provides data on instances where the company was notified by an affiliated European NGO that an account or tweet infringed local laws against hate speech. Twitter participates in the European Union’s Code of Conduct on Countering Illegal Hate Speech Online: along with Facebook, YouTube, and Microsoft, it came to an agreement with the European Commission “to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary” (Llanso, 2017). Weighty concerns have been raised about the Code of Conduct, since it circumvents the judiciary in determining whether speech is illegal and thus includes no remedy or accountability mechanisms in the commitments made by governments and companies. Twitter’s latest report does very little to alleviate these concerns, but it still elucidates how, as a company participating in the Code, Twitter can offer some transparency. It is imperative that Twitter and other companies not acquiesce to takedown demands for which no rationale is given, and that participating companies provide more information about how the Code actually operates in practice.

The final category on Twitter’s report involves government terms of service reports: data about instances when government officials ask Twitter to remove or withhold content under the company’s own terms of service, particularly its rules against the promotion of extremism or terrorism. These referrals reach the company via its standard customer support intake channels rather than through court orders or other legal injunctions; government agents flag content in the same way that regular users do. Almost six thousand accounts on Twitter were reported in this way, and in over 80% of these cases Twitter took action against the reported accounts. This has emerged as the primary way that government censorship of the internet takes place, and it is a process with very little friction: it bypasses the safeguards of formal legal process, such as review by an independent arbiter and a forum for the impacted speaker to appeal – the very oversight mechanisms that hold governments accountable and ensure that officials cannot censor protected speech.

Government endeavors to pursue the withholding or takedown of content continue to be a major concern for freedom of speech and expression on the internet. Transparency reports from internet firms like Twitter and Facebook provide insightful information that fuels public debate. Twitter has expanded its report to include crucial information that will assist journalists, advocates, and users in understanding the interplay between companies and governments over the restriction of content. Furthermore, extant research shows that internet companies like Twitter should consider releasing information about enforcement actions related to their terms of service.

Methodology

We used the Lumen database as a primary source of data to obtain copies of all removal requests that Twitter received from Roskomnadzor during the period in question. Lumen is an independent research project that studies cease and desist letters concerning online content. It allows researchers to find and sort removal requests by country, sender, recipient, and date, and acts as a critical resource for civil society and academic researchers who study censorship trends around the globe and removal practices of internet companies.

First, between April 10 and April 26, 2018, we conducted a number of searches on https://www.lumendatabase.org and identified all removal requests that met the following criteria:

  • Sender: “Roskomnadzor” OR “Roscomnadzor” OR “Federal Service for Supervision in the Sphere of Telecom, Information Technologies and Mass Communications”
  • Recipient: “Twitter”
  • Sent on: Between July 1, 2017 and December 31, 2017

Second, we extracted a description of allegedly illegal content and Twitter URLs mentioned in the requests and coded the description using the typology of prohibited content set forth in the above-mentioned “Law 149-FZ.”[1]
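The coding step can be approximated by keyword matching against the Law 149-FZ categories. The keyword lists below are illustrative assumptions, not the study's actual coding rules:

```python
# Law 149-FZ categories with illustrative trigger keywords (assumptions).
CATEGORIES = {
    "suicidal": ("suicide", "self-harm"),
    "child porn": ("minor", "child pornograph"),
    "extremist": ("extremis",),
    "gambling": ("gambling", "casino", "lottery"),
    "drugs": ("narcotic", "drug"),
}

def code_description(description):
    """Assign the first matching category to a request's free-text
    description of allegedly illegal content; 'other' if none match."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"
```

A request describing, say, calls for public protests falls outside the Law 149-FZ typology and would be coded "other."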

Third, we checked the status of the reported URLs for accessibility both from Russia (St. Petersburg, internet provider Rostelecom) and from outside of Russia (Lviv, Ukraine, internet provider Kyivstar). We used these two locations in order to: 1) identify which content was blocked locally and which was removed globally, and 2) determine whether, when Twitter chooses to withhold content locally, it does so based on a user’s physical location or the country setting in a user’s profile.

Finally, we consulted the Unified Register of Domain Names, Internet Website Page Locators, and Network Addresses that Allow to Identify Internet Websites Containing Information Prohibited for Distribution in the Russian Federation (“Unified Register”), maintained by Roskomnadzor. Under Law 149-FZ, when content is found to be illegal either by an administrative agency or by a court, the decision is reported to Roskomnadzor, which then sends a request to the website owner and/or hosting provider asking that the identified web pages or materials be blocked or removed. If the content is not promptly removed or blocked, Roskomnadzor adds an identifier of the website (domain name, URL, or IP address) to the Unified Register, which automatically obliges telecom operators throughout the country to block access to that website for their users.[2] Thus, the presence of Twitter URLs in the Unified Register would signal that the company pushed back on Roskomnadzor’s requests to block or remove them.
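The register check amounts to matching a reported URL, or its domain, against register entries. A minimal sketch, under the assumption that register entries are plain strings holding either an exact URL or a bare domain (a bare-domain entry triggers blocking of the whole site):

```python
from urllib.parse import urlparse

def in_unified_register(url, register_entries):
    """Return True if the URL itself, or its domain, appears in the
    Unified Register entries (assumed to be plain strings)."""
    domain = urlparse(url).netloc
    return any(entry == url or entry == domain for entry in register_entries)
```

The actual Unified Register also lists IP addresses; resolving and matching those is omitted here for brevity.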

Data analysis

What Twitter uploads to Lumen

According to Twitter, unless it was prohibited from doing so, it uploaded all actioned requests to Lumen. The company defines “actioned” as an instance where “Tweets and/or accounts were withheld or the content was removed.”[3] However, we found requests on Lumen where the company did not comply with the government’s demands and did not withhold or otherwise restrict the reported content in Russia.[4] This means that Twitter did not take action on all of the requests it uploaded to Lumen.

In Lumen’s database we identified a total of 801 requests received by Twitter from Roskomnadzor during the second half of 2017. From these we removed a number of entries:

  • 14 duplicate entries with the same date that referred to the same URL(s) and the same underlying government decision (we assume they were mistakenly sent twice, either to Twitter or by Twitter to Lumen);[5]
  • 5 notices in which Roskomnadzor informed Twitter that previously reported content had been taken off the Unified Register. In our view, these do not qualify as removal requests and were thus excluded from the total population;
  • 7 entries that did not contain any “Supporting documents” and therefore could not be examined in detail.[6] Three of these entries offered an option to request “Supporting documents” and “check back in 7 days”;[7] we requested and waited, but to no avail;
  • 1 request sent by Turkish authorities (although it was uploaded to Lumen with the name of Roskomnadzor as a sender).

This resulted in a total population of 774 unique removal requests, with one reported tweet per request. This number matches neither 1,292 (the total number of requests Twitter reported receiving from Roskomnadzor, as shown in Table 1) nor 51% of 1,292 (the share of “actioned” requests where some content was withheld, according to the transparency report). There are at least two possible explanations for this mismatch: either our search query did not capture all relevant government requests, or Twitter’s content moderators were not always consistent in what they sent to Lumen.
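The filtering arithmetic can be stated explicitly:

```python
# Filtering arithmetic behind the final population of requests.
raw_total = 801        # entries returned by the Lumen searches
duplicates = 14        # same date, URL(s), and underlying decision
delisting_notices = 5  # content reported as taken OFF the Unified Register
no_documents = 7       # no "Supporting documents" available to examine
misattributed = 1      # Turkish request uploaded under Roskomnadzor's name

population = raw_total - duplicates - delisting_notices - no_documents - misattributed
print(population)  # 774
```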

What types of content Twitter blocked or removed

Most of Roskomnadzor’s removal requests related to suicidal content (Table 2). In 75% of these requests, Twitter blocked the reported tweets for Russian users. Russian law prohibits the dissemination of content that describes ways of committing suicide, as well as calls to commit suicide. Similarly, Twitter’s Rules forbid content that promotes “suicide or self-harm.” Still, as Table 2 demonstrates, that prohibition was not universally enforced: the tweets withheld for Russian users were left available for everyone else. In four cases Twitter did not comply with Roskomnadzor’s requests and kept the tweets up in Russia. The rationale is hard to understand, as the content of these tweets is, in our view, clearly suicidal.[8]

Table 2. Removal requests by type of content and taken action

                            For users in Russia                                For users outside of Russia
Content type                Suspended  Private  No page  Blocked  Available    Suspended  Private  No page  Blocked  Available
Calls for public protests           0        0        3        0          4            0        0        3        0          4
Child porn                          6        0        8       23          0            6        0        8        0         23
Extremist                           0        0        0        3          0            0        0        0        0          3
Gambling                            0        0        0        5          0            0        0        0        0          5
Hatred                              0        0        0        1          0            0        0        0        0          1
Suicidal                           60       15      103      539          4           60       15      103        0        543

“Suspended” – the whole account shows as “Account suspended.”  In the case of “suicidal” and “child porn” content, 66 accounts were suspended, but it is not clear whether the suspension was the result of a request from Roskomnadzor or another violation.

“Private” – the page shows a notification “This account’s Tweets are protected. Only confirmed followers have access to [account name]’s Tweets and complete profile. Click the “Follow” button to send a follow request.”

“No page” – the URL shows the notification “Sorry, that page doesn’t exist!” This could be the result of a user removing the content on his/her own initiative or at Twitter’s request.

“Blocked” – the tweet is not accessible and the page shows the notification “This Tweet from [account name] has been withheld in Russia in response to a legal demand. Learn more.”

“Available” – the URL is accessible, with no restriction.
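The five statuses above can be assigned mechanically from the notification a fetched page displays. A minimal sketch (the marker strings are taken from the notifications quoted above; the function itself is our illustration, not Twitter's code):

```python
def classify_page(page_text):
    """Map the visible text of a fetched tweet or account page to one
    of the five status labels used in Table 2."""
    markers = [
        ("suspended", "Account suspended"),
        ("private", "Tweets are protected"),
        ("no page", "Sorry, that page doesn't exist!"),
        ("blocked", "has been withheld in Russia in response to a legal demand"),
    ]
    for label, marker in markers:
        if marker in page_text:
            return label
    return "available"  # no restriction notification found
```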

Thirty-seven demands from Roskomnadzor related to content that the agency described as “materials with pornographic images of minors and (or) announcements about the involvement of minors as performers for participating in entertainment events of a pornographic nature.” As a result, Twitter locally blocked 23 of those tweets but kept them up for the rest of the world. Here, the overlap between Russian law and Twitter’s policies is not as evident as in the “suicidal” category: what Roskomnadzor considers to be “pornographic images of minors,”[9] Twitter likely views as “permissible adult content,” which does not violate its policies.[10]

Similarly, “extremist” (3), “gambling” (5) and “hatred” (1) tweets were blocked locally in Russia only. But “calls for public protests” tweets (4) were not “actioned” and still remain available everywhere.[11]
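The category-level outcomes can be recomputed directly from Table 2's counts for users in Russia; a minimal sketch with the counts transcribed from the table:

```python
# Table 2 counts for users in Russia, per category:
# (suspended, private, no page, blocked, available)
table2_russia = {
    "calls for public protests": (0, 0, 3, 0, 4),
    "child porn": (6, 0, 8, 23, 0),
    "extremist": (0, 0, 0, 3, 0),
    "gambling": (0, 0, 0, 5, 0),
    "hatred": (0, 0, 0, 1, 0),
    "suicidal": (60, 15, 103, 539, 4),
}

for category, counts in table2_russia.items():
    total = sum(counts)
    blocked = counts[3]
    print(f"{category}: {blocked}/{total} blocked locally ({blocked / total:.0%})")
```

The per-category totals sum to the 774 unique requests in our population, and the suicidal category's 539 of 721 reproduces the 75% local-blocking rate cited above.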

Legal demands from Roskomnadzor also resulted in the removal of content for violation of Twitter’s Terms of Service (“Accounts (TOS)”) with respect to 224 accounts. In other words, instead of handling those demands as removals for violation of local law, Twitter’s moderators re-categorized them and handled them as reports of violations of Twitter’s content policies. There is nothing particularly wrong with this approach if the bad content ends up removed in any case. But it raises the important question of whether Twitter processed the government’s requests with the same level of priority as it does reports from all other users who flag bad content.

How Twitter restricts content geographically

Back in 2012, in order to avoid global removals of content based on local demands from countries “that have different ideas about the contours of freedom of expression,” the company introduced a granular approach to filtering that was meant to enable it to withhold content from users in a specific country while keeping it available in the rest of the world.[12]

The details of how Twitter identifies the location of users are somewhat opaque. Judging by an article in the Help section called “How to change your country settings,” Twitter may first set a country based on a number of signals, including the user’s IP address, but a user can override that setting at any time by choosing another country in his/her profile.[13]

Our tests demonstrated that Twitter withholds content in Russia regardless of a user’s country settings. Russia-based users (likely identified by their IP addresses), logged-in or logged-out, will not have access to locally withheld content regardless of their country setting. Only for users based outside of Russia is the country determined by the country selected in the profile settings. The only way for Russian users to access such withheld materials on Twitter is by using VPNs or otherwise hiding their IP addresses.
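The withholding behavior we observed can be summarized as a small decision rule (an inference from our accessibility tests, not Twitter's documented algorithm):

```python
def tweet_visible(withheld_in, ip_country, profile_country):
    """Whether a locally withheld tweet is visible to a given user.
    Users whose IP places them in the withholding country never see it,
    regardless of profile settings; for everyone else, the profile's
    country setting decides."""
    if ip_country == withheld_in:
        return False  # IP inside the withholding country: always hidden
    return profile_country != withheld_in  # outside: profile setting decides
```

For example, a user in Ukraine who sets Russia as the profile country also loses access, while a user in Russia gains nothing by setting the profile to the US.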

Does Twitter conduct due diligence on government requests?

The answer to this question is unknown. According to the 2018 Corporate Accountability Index, Twitter does not disclose whether it carries out due diligence on government requests for content removal before deciding how to respond. The fact that it complied with only 51% of the requests suggests that the other 49% were reviewed by the company’s legal team and pushed back on.[14] Even if the company does conduct such due diligence, we identified several shortcomings which suggest that, at least with respect to the Russian requests, such efforts were limited.

  • Take, for example, this notice (scroll down to page 3 for the English text). Roskomnadzor referred to an underlying decision of the Federal Service for Surveillance on Consumer Rights Protection and Human Wellbeing,[15] dated 24.11.2017, 32638. But no such decision was attached to the request. Moreover, Roskomnadzor stated that the identified suicidal content, located at URL https://twitter.com/Aryuuki_/status/929417121020481536, had been added to the Unified Register under the reference number 337640-??. We checked both official and unofficial copies of the Unified Register and could not locate the reference number or the URL.[16]
  • The authors of the report decided to give only one example of the type of content Twitter withheld in Russia:

“Twitter withheld one Tweet related to a far right organization, Right Sector. In 2014, the Supreme Court of the Russian Federation ruled that this organization was a terrorist organization. More information can be found on Lumen here.”

Unfortunately, several things in this example are either not accurate or provide an incomplete picture:

  • In 2014, the Supreme Court of the Russian Federation ruled that Right Sector was an extremist, not a terrorist, organization. “Extremism” is a very loose, broad term under Russian law and is often used to prosecute political opponents and religious organizations (the Jehovah’s Witnesses, for example, have also been found to be an extremist organization).
  • The court decision which Roskomnadzor referred to in its request found a piece of music, not a tweet, to be illegal.
  • During the second half of 2017, Twitter withheld not just one tweet related to Right Sector; it also blocked the organization’s entire account (based on a demand from Roskomnadzor dated October 27, 2017).
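Cross-checking a request against the Unified Register, as we did above, is mechanical once a copy of the register has been parsed. A minimal sketch, assuming each entry is a dict with `ref` and `urls` fields (the real dumps from rkn.gov.ru and rublacklist.net use their own schemas):

```python
from typing import Iterable, Optional

def find_register_entry(entries: Iterable[dict],
                        ref: Optional[str] = None,
                        url: Optional[str] = None) -> Optional[dict]:
    """Return the first register entry matching a reference number or a URL.

    `entries` is an already-parsed register dump; the `ref` and `urls`
    field names are assumptions for illustration, not the real schema.
    """
    for entry in entries:
        if ref is not None and entry.get("ref") == ref:
            return entry
        if url is not None and url in entry.get("urls", []):
            return entry
    return None

dump = [{"ref": "123456-AB", "urls": ["https://example.com/page"]}]
assert find_register_entry(dump, ref="123456-AB") is not None
assert find_register_entry(dump, url="https://example.com/missing") is None
```

A lookup that returns nothing for both the reference number and the URL, as in the notice discussed above, is exactly the kind of discrepancy a due-diligence review should flag.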

Conclusion and recommendations

Governments continue to adopt social media to provide complementary channels for information dissemination, participation, and communication through which citizens can reach government officials and make informed decisions. Indeed, government use of social media is positively and significantly linked with perceptions of government transparency, which further enhances trust in government. However, rather than using social media to enhance transparency, some governments seek to control information sharing and freedom of speech in covert ways, which the transparency reports of companies such as Twitter elucidate.

In our study, we found that in the second half of 2017 most of the content removal requests that Twitter received from the Russian government and withheld locally were related to suicidal content. When the company blocked content locally, it did so based on a user’s geolocation, likely determined by IP address, and kept it available for the rest of the world.

Our analysis has several limitations due to the incompleteness of the data that Twitter makes available: the company uploaded to Lumen only some of the demands it received from Roskomnadzor, including both demands where Twitter ended up withholding content locally and demands where it kept the content up.

We recommend that the company focus on two areas in order to meet (or even exceed) its own standards:

  • Conduct a thorough due diligence process on each removal request rather than relying on Roskomnadzor’s assurances. Each request from Roskomnadzor should be accompanied by an underlying decision issued by one of the designated agencies or a court that found a particular piece of content illegal.
  • Publish all government requests, including the requests that Twitter contested or processed as reports of terms-of-service violations. This is the best way to make transparency reports more meaningful, to engage civil society and the academic community in holding the company accountable to its own high standards, and to avoid any suspicion of a behind-the-scenes pact between Twitter and Russian state censors, or any other censors for that matter.
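The first recommendation amounts to a simple intake check: a removal request should not even reach substantive review until its paperwork is complete. A sketch with hypothetical field names:

```python
def passes_intake(request: dict) -> bool:
    """Reject removal requests that lack the documentation our
    due-diligence recommendation requires: an attached underlying
    decision and a register reference. Field names are hypothetical.
    """
    has_decision = bool(request.get("underlying_decision"))
    has_register_ref = bool(request.get("register_ref"))
    return has_decision and has_register_ref

complete = {"underlying_decision": "attached court decision",
            "register_ref": "000000-XX"}
incomplete = {"register_ref": "000000-XX"}  # no decision attached

assert passes_intake(complete)
assert not passes_intake(incomplete)
```

In practice the second half of the check would go further and verify that the register reference actually resolves in a current copy of the Unified Register, rather than trusting the requester’s assertion.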

As the recent face-off between Telegram and Roskomnadzor demonstrates, maximum transparency in dealings with local governments only helps, not hurts, global platforms in winning over local users and, more importantly, in holding local autocrats accountable – an aspirational vision that Twitter shared with us six years ago.

References


[1] Full name of the law is Federal Law #149-FZ “On Information, Information Technology and Information Protection.”

[2] There are some variations of this procedure. For example, the process is more expedient for “extremist” content and “calls to public protests” which can be blocked immediately based on the decision of the General Prosecutor or his deputies, without forwarding a request to a website owner or a hosting provider. For an in-depth overview of the blocking procedures and insights into the operations of Roskomnadzor, see “This is how Russian Internet censorship works: A journey into the belly of the beast that is the Kremlin’s media watchdog”, Meduza, 13 Aug. 2015, https://meduza.io/en/feature/2015/08/13/this-is-how-russian-internet-censorship-works.

[3] See the definition of “Percentage of reports actioned” at https://transparency.twitter.com/en/removal-requests.html

[4] For example, there were 4 requests for removal of content described by Roskomnadzor as “calls for public protests,” which still remains accessible to everyone. See, e.g., https://www.lumendatabase.org/notices/15475385 (page 6).

[5] We left the duplicates with different dates (assuming that Roskomnadzor sought to remove the same content more than once).

[6] See, e.g., https://www.lumendatabase.org/notices/15461064

[7] See, e.g., https://www.lumendatabase.org/notices/14937449

[8] Compare, for example, a tweet by @Aryuuki_ “In fact, I think that suicide in the form of a jump from a high-rise is very cool <…>” (not blocked, still available in Russia) and a tweet by @I_am_Pr_Unicorn “With every day I’m more convinced: suicide is a way out” (blocked in Russia)

[9] All of these tweets are Hentai art/anime (See https://en.wikipedia.org/wiki/Anime)

[10] “Twitter media policy – Twitter Help Center.” https://help.twitter.com/en/rules-and-policies/media-policy.

[11] See, e.g., a tweet calling to “join the march of millions on 12/15/17” https://twitter.com/5nov2017/status/930516874118488064

[12] https://blog.twitter.com/official/en_us/a/2012/tweets-still-must-flow.html

[13] “How to change your country settings – Twitter Help Center.” https://help.twitter.com/en/managing-your-account/how-to-change-country-settings.

[14] Although no Twitter URLs appear in the Unified Register, which signals that Roskomnadzor, for some reason, has been satisfied with Twitter’s removal practices.

[15] An administrative body authorized to rule on suicidal content, also known as Rospotrebnadzor.

[16] We used Roskomnadzor database (http://blocklist.rkn.gov.ru/) and an unofficial copy of it published by Russian NGO Roscomsvoboda (https://reestr.rublacklist.net/).
