Industry Efforts to Censor Pro-Terrorism Online Content Pose Risks to Free Speech



  • In recent months, social media platforms—under pressure from a number of governments—have adopted new policies and practices to remove content that promotes terrorism. As the Guardian reported, these policies are typically carried out by low-paid contractors (or, in the case of YouTube, volunteers) with little to no transparency or accountability. While the motivations of these companies might be sincere, such private censorship poses a risk to the free expression of Internet users.

    As groups like the Islamic State have gained traction online, Internet intermediaries have come under pressure from governments and other actors, including the following:

    • the Obama Administration;
    • the U.S. Congress in the form of legislative proposals that would require Internet companies to report “terrorist activity” to the U.S. government;
    • the European Union in the form of a “code of conduct” requiring Internet companies to take down terrorist propaganda within 24 hours of being notified, and via the EU Internet Forum;
    • individual European countries such as the U.K., France and Germany that have proposed exorbitant fines for Internet companies that fail to take down pro-terrorism content; and,
    • victims of terrorism who seek to hold social media companies civilly liable in U.S. courts for providing “material support” to terrorists by simply providing online platforms for global communication.

    One of the coordinated industry efforts against pro-terrorism online content is the development of a shared database of “hashes of the most extreme and egregious terrorist images and videos” that the companies have removed from their services. The companies that started this effort—Facebook, Microsoft, Twitter, and Google/YouTube—explained that the idea is that by sharing “digital fingerprints” of terrorist images and videos, other companies can quickly “use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”
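
    The participating companies have not published the fingerprinting scheme behind this shared database, so the following is only a minimal sketch of the general idea, assuming a plain SHA-256 file hash as the “digital fingerprint” and an invented shared hash set; the real database reportedly uses fingerprints designed to survive re-encoding, which a cryptographic hash cannot.

        # Minimal sketch of matching uploads against a shared takedown database.
        # Assumes a plain SHA-256 file hash; the real consortium database uses
        # unpublished "digital fingerprints" designed to also match re-encoded
        # or slightly altered media, which this sketch does not attempt.
        import hashlib
        from pathlib import Path

        # Hypothetical shared database of hex digests contributed by participants.
        SHARED_HASH_DB = {
            "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
        }

        def fingerprint(path: str) -> str:
            """Return the SHA-256 hex digest of a file's contents."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()

        def flag_for_review(path: str) -> bool:
            """True if an upload matches the shared database; each company still
            reviews the match against its own policies before removing it."""
            return fingerprint(path) in SHARED_HASH_DB

        if __name__ == "__main__":
            uploads = Path("uploads")
            if uploads.is_dir():
                for item in uploads.iterdir():
                    if item.is_file() and flag_for_review(str(item)):
                        print(f"{item}: matches shared database, queue for policy review")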

    As a second effort, the same companies created the Global Internet Forum to Counter Terrorism, which will help the companies “continue to make our hosted consumer services hostile to terrorists and violent extremists.” Specifically, the Forum “will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.” The Forum will focus on technological solutions; research; and knowledge-sharing, which will include engaging with smaller technology companies, developing best practices to deal with pro-terrorism content, and promoting counter-speech against terrorism.

    Internet companies are also taking individual measures to combat pro-terrorism content. Google announced several new efforts, while both Google and Facebook have committed to using artificial intelligence technology to find pro-terrorism content for removal.

    Private censorship must be cautiously deployed

    While Internet companies have a First Amendment right to moderate their platforms as they see fit, private censorship—or what we sometimes call shadow regulation—can be just as detrimental to users’ freedom of expression as governmental regulation of speech. As social media companies increase their moderation of online content, they must do so as cautiously as possible.

    Through our project Onlinecensorship.org, we monitor private censorship and advocate for companies to be more transparent and accountable to their users. We solicit reports from users whenever Internet companies remove specific posts, other content, or whole accounts.

    We consistently urge companies to follow basic guidelines to mitigate the impact on users’ free speech. Specifically, companies should have narrowly tailored, clear, fair, and transparent content policies (i.e., terms of service or “community guidelines”); they should engage in consistent and fair enforcement of those policies; and they should have robust appeals processes to minimize the impact on users’ freedom of expression.

    Over the years, we’ve found that companies’ efforts to moderate online content almost always result in overbroad content takedowns or account deactivations. We, therefore, are justifiably skeptical that the latest efforts by Internet companies to combat pro-terrorism content will meet our basic guidelines.

    A central problem for these global platforms is that such private censorship can be counterproductive. Users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules if, for example, their post includes an image of one of more than 600 “terrorist leaders” designated by Facebook. In one instance, a journalist from the United Arab Emirates was temporarily banned from the platform for posting a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s content moderators failed to grasp.

    A more fundamental problem is that crafting narrow definitions is difficult. What counts as speech that “promotes” terrorism? What even counts as “terrorism”? These U.S.-based companies may look to the State Department’s list of designated terrorist organizations as a starting point. But Internet companies will sometimes go further. Facebook, for example, deactivated the personal accounts of Palestinian journalists; it did the same thing to Chechen independence activists on the grounds that they were involved in “terrorist activity.” These examples demonstrate the challenges social media companies face in fairly applying their own policies.

    A recent investigative report by ProPublica revealed how Facebook’s content rules can lead to seemingly inconsistent takedowns. The authors wrote: “[T]he documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.” The report emphasized the need for companies to be more transparent about their content rules, and to have rules that are fair for all users around the world.

    Artificial intelligence poses special concerns

    We are concerned about the use of artificial intelligence automation to combat pro-terrorism content because of the imprecision inherent in systems that automatically block or remove content based on an algorithm. Facebook has perhaps been the most aggressive in deploying AI in the form of machine learning technology in this context. The company’s latest AI efforts include using image matching to detect previously tagged content, using natural language processing techniques to detect posts advocating for terrorism, removing terrorist clusters, removing new fake accounts created by repeat offenders, and enforcing its rules across other Facebook properties such as WhatsApp and Instagram.
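
    None of these companies publish their production models, so the sketch below only illustrates the general shape of automated text flagging: score a post against a list of phrases, compare the score to thresholds, and queue the post for removal or review. The phrase list, weights, and thresholds are invented for this example and do not come from any real system.

        # Illustrative sketch only: a crude weighted-phrase scorer standing in for
        # the unpublished natural language processing models described above.
        # Phrases, weights, and thresholds are invented for illustration.
        WEIGHTED_PHRASES = {
            "join the fight": 0.6,
            "pledge allegiance to": 0.5,
            "martyrdom operation": 0.5,
        }
        REMOVE_THRESHOLD = 0.8
        REVIEW_THRESHOLD = 0.4

        def score(post: str) -> float:
            """Sum the weights of every listed phrase that appears in the post."""
            text = post.lower()
            return sum(w for phrase, w in WEIGHTED_PHRASES.items() if phrase in text)

        def triage(post: str) -> str:
            s = score(post)
            if s >= REMOVE_THRESHOLD:
                return "auto-remove"
            if s >= REVIEW_THRESHOLD:
                return "human review"
            return "leave up"

        # A news report tripping the same phrases as propaganda is exactly the
        # context problem discussed below.
        print(triage("New report documents how recruiters urge teens to join the fight"))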

    This imprecision exists because it is difficult for humans and machines alike to understand the context of a post. While it’s true that computers are better at some tasks than people, understanding context in written and image-based communication is not one of those tasks. While AI algorithms can understand very simple reading comprehension problems, they still struggle with even basic tasks such as capturing meaning in children’s books. And while it’s possible that future improvements to machine learning algorithms will give AI these capabilities, we’re not there yet.

    Google’s Content ID, for example, which was designed to address copyright infringement, has also blocked fair uses, news reporting, and even posts by copyright owners themselves. If automatic takedowns based on copyright are difficult to get right, how can we expect new algorithms to know the difference between a terrorist video clip that’s part of a satire and one that’s genuinely advocating violence?

    Until companies can publicly demonstrate that their machine learning algorithms can accurately and reliably determine whether a post is satire, commentary, news reporting, or counter-speech, they should refrain from censoring their users by way of this AI technology.

    Even if a company were to have an algorithm for detecting pro-terrorism content that was accurate, reliable, and had a minimal percentage of false positives, AI automation would still be problematic because machine learning systems are not robust to distributional change. Once machine learning algorithms are trained, they are as brittle as any other algorithm, and building and training machine learning algorithms for a complex task is an expensive, time-intensive process. Yet the world that algorithms are working in is constantly evolving and soon won’t look like the world in which the algorithms were trained.

    This might happen in the context of pro-terrorism content on social media: once terrorists realize that algorithms are identifying their content, they will start to game the system by hiding their content or altering it so that the AI no longer recognizes it (by leaving out key words, say, or changing their sentence structure, or a myriad of other ways—it depends on the specific algorithm). This problem could also go the other way: a change in culture or how some group of people express themselves could cause an algorithm to start tagging their posts as pro-terrorism content, even though they’re not (for example, if people co-opted a slogan previously used by terrorists in order to de-legitimize the terrorist group).
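
    To make the gaming problem concrete, the same kind of hypothetical phrase scorer can be run against a trivially obfuscated post; the phrases and substitutions below are invented, but the failure mode is general to any filter built against yesterday’s data.

        # Sketch of the "gaming" problem: a filter tuned on how content looked at
        # training time misses the same content after trivial character swaps.
        WEIGHTED_PHRASES = {"join the fight": 0.6, "martyrdom operation": 0.5}

        def score(post: str) -> float:
            text = post.lower()
            return sum(w for phrase, w in WEIGHTED_PHRASES.items() if phrase in text)

        original = "Brothers, join the fight"
        obfuscated = "Brothers, j0in the f1ght"  # same meaning to a human reader

        print(score(original))    # 0.6 -- flagged
        print(score(obfuscated))  # 0.0 -- sails past the filter unchanged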

    We strongly caution companies (and governments) against assuming that technology will be a panacea for identifying pro-terrorism content, because this technology simply doesn’t yet exist.

    Is taking down pro-terrorism content actually a good idea?

    Apart from the free speech and artificial intelligence concerns, there is an open question of efficacy. The sociological assumption is that removing pro-terrorism content will reduce terrorist recruitment and community sympathy for those who engage in terrorism. In other words, the question is not whether terrorists are using the Internet to recruit new operatives—the question is whether taking down pro-terrorism content and accounts will meaningfully contribute to the fight against global terrorism.

    Governments have not sufficiently demonstrated this to be the case. And some experts believe it is absolutely not the case. For example, Michael German, a former FBI agent with counter-terrorism experience and a current fellow at the Brennan Center for Justice, said, “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.” In fact, as we’ve argued before, censoring the content and accounts of determined groups could be counterproductive and actually result in pro-terrorism content being publicized more widely (a phenomenon known as the Streisand Effect).

    Additionally, permitting terrorist accounts to exist and allowing pro-terrorism content to remain online, including that which is publicly available, may actually be beneficial by providing opportunities for ongoing engagement with these groups. For example, a Kenyan government official stated that shutting down an Al Shabaab Twitter account would be a bad idea: “Al Shabaab needs to be engaged positively and [T]witter is the only avenue.”

    Keeping pro-terrorism content online also contributes to journalism, open source intelligence gathering, academic research, and generally the global community’s understanding of this tragic and complex social phenomenon. On intelligence gathering, the United Nations has said that “increased Internet use for terrorist purposes provides a corresponding increase in the availability of electronic data which may be compiled and analysed for counter-terrorism purposes.”

    In conclusion

    While we recognize that Internet companies have a right to police their own platforms, we also recognize that such private censorship is often in response to government pressure, which is often not legitimately wielded.

    Governments often get private companies to do what they can’t do themselves. In the U.S., for example, pro-terrorism content falls within the protection of the First Amendment. Other countries, many of which do not have similarly robust constitutional protections, might nevertheless find it politically difficult to pass speech-restricting laws.

    Ultimately, we are concerned about the serious harm that sweeping censorship regimes—even by private actors—can have on users, and society at large. Internet companies must be accountable to their users as they deploy policies that restrict content.

    First, they should make their content policies narrowly tailored, clear, fair, and transparent to all—as the Guardian’s Facebook Files demonstrate, some companies have a long way to go.

    Second, companies should engage in consistent and fair enforcement of those policies.

    Third, companies should ensure that all users have access to a robust appeals process—content moderators are bound to make mistakes, and users must be able to seek justice when that happens.

    Fourth, until artificial intelligence systems can be proven accurate, reliable and adaptable, companies should not deploy this technology to censor their users’ content.

    Finally, we urge those companies that are subject to increasing governmental demands for backdoor censorship regimes to improve their annual transparency reporting to include statistics on takedown requests related to the enforcement of their content policies.

    https://www.eff.org/deeplinks/2017/07/industry-efforts-censor-pro-terrorism-online-content-pose-risks-free-speech





Tmux Commands

screen and tmux

A comparison of the features (or more-so just a table of notes for accessing some of those features) for GNU screen and BSD-licensed tmux.

The formatting here is simple enough to understand (I would hope). ^ means ctrl+, so ^x is ctrl+x. M- means meta (generally left-alt or escape)+, so M-x is left-alt+x

It should be noted that this is nowhere near a full feature-set of either program. This - being a cheat-sheet - is just to point out the most basic features to get you on the road.

Trust the developers and manpage writers more than me. This document is originally from 2009 when tmux was still new - since then both of these programs have had many updates and features added (not all of which have been dutifully noted here).

Action | tmux | screen
start a new session | tmux  OR  tmux new  OR  tmux new-session | screen
re-attach a detached session | tmux attach  OR  tmux attach-session | screen -r
re-attach an attached session (detaching it from elsewhere) | tmux attach -d  OR  tmux attach-session -d | screen -dr
re-attach an attached session (keeping it attached elsewhere) | tmux attach  OR  tmux attach-session | screen -x
detach from currently attached session | ^b d  OR  ^b :detach | ^a ^d  OR  ^a :detach
rename-window to newname | ^b , <newname>  OR  ^b :rename-window <newname> | ^a A <newname>
list windows | ^b w | ^a w
list windows in chooseable menu | | ^a "
go to window # | ^b # | ^a #
go to last-active window | ^b l | ^a ^a
go to next window | ^b n | ^a n
go to previous window | ^b p | ^a p
see keybindings | ^b ? | ^a ?
list sessions | ^b s  OR  tmux ls  OR  tmux list-sessions | screen -ls
toggle visual bell | | ^a ^g
create another window | ^b c | ^a c
exit current shell/window | ^d | ^d
split window/pane horizontally | ^b " | ^a S
split window/pane vertically | ^b % | ^a |
switch to other pane | ^b o | ^a <tab>
kill the current pane | ^b x  OR  (logout/^D) |
collapse the current pane/split (but leave processes running) | | ^a X
cycle location of panes | ^b ^o |
swap current pane with previous | ^b { |
swap current pane with next | ^b } |
show time | ^b t |
show numeric values of panes | ^b q |
toggle zoom-state of current pane (maximize/return current pane) | ^b z |
break the current pane out of its window (to form new window) | ^b ! |
re-arrange current panes within same window (different layouts) | ^b [space] |
kill the current window (and all panes within) | ^b :killw [target-window] |
  • Criteo is an ad company. You may not have heard of them, but they do retargeting, the type of ads that pursue users across the web, beseeching them to purchase a product they once viewed or have already bought. To identify users across websites, Criteo relies on cross-site tracking, using cookies and other methods to follow users as they browse. This has led them to try to circumvent the privacy features in Apple’s Safari browser, which protects its users from such tracking. Despite this apparently antagonistic attitude towards user privacy, Criteo has also been whitelisted by the Acceptable Ads initiative. This means that their ads are unblocked by popular adblockers such as Adblock and Adblock Plus. Criteo pays Eyeo, the operator of Acceptable Ads, for this whitelisting and must comply with their format requirements. But this also means they can track any user of these adblockers who has not disabled Acceptable Ads, even if they have installed privacy tools such as EasyPrivacy with the intention of protecting themselves. EFF is concerned about Criteo’s continued anti-privacy actions and their continued inclusion in Acceptable Ads.

    Safari Shuts out Third Party Cookies…

    All popular browsers give users control over who gets to set cookies, but Safari is the only one that blocks third-party cookies (those set by a domain other than the site you are visiting) by default. (Safari’s choice is important because only 5-10% of users ever change default settings in software.) Criteo relies on third-party cookies. Since users have little reason to visit Criteo’s own website, the company gets its cookies onto users’ machines through its integration on many online retail websites. Safari’s cookie blocking is a major problem for Criteo, especially given the large and lucrative nature of iPhone’s user base. Rather than accept this, Criteo has repeatedly implemented ways to defeat Safari’s privacy protections.

    One workaround researchers detected Criteo using was to redirect users from sites where their service was present to their own. For example, if you visited wintercoats.com and clicked on a product category, you would be first diverted to criteo.com and then redirected to wintercoats.com/down-filled. Although imperceptible to the user, this detour was enough to persuade the browser that criteo.com is a site you chose to visit, and therefore a first party entitled to set a cookie rather than a third party. Criteo applied for a patent on this method in August 2013.
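
    A rough sketch of that redirect pattern, written with the Flask framework and invented domain, route, cookie, and parameter names, might look like the following; it illustrates the mechanism only and is not Criteo’s actual code.

        # Hypothetical sketch of the bounce-redirect pattern described above: the
        # retailer's category link points at the tracker's own domain, which sets
        # its cookie as a "first party" and immediately redirects onward.
        # All names here are invented for illustration.
        from flask import Flask, make_response, redirect, request

        app = Flask(__name__)  # imagine this running on tracker.example

        @app.route("/bounce")
        def bounce():
            destination = request.args.get("dest", "https://wintercoats.example/")
            resp = make_response(redirect(destination, code=302))
            # Because the browser navigated to tracker.example directly, older
            # Safari versions treated this cookie as first-party.
            resp.set_cookie("uid", "abc123", max_age=60 * 60 * 24 * 365)
            return resp

        # On the retailer's site, the "down-filled" category link would point to:
        #   https://tracker.example/bounce?dest=https://wintercoats.example/down-filled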

    …And Closes the Backdoor

    Last summer, however, Apple unveiled a new version of Safari with more sophisticated cookie handling—called Intelligent Tracking Prevention (ITP)—which killed off the redirect technique as a means to circumvent the cookie controls. The browser now analyzes if the user has engaged with a website in a meaningful way before allowing it to set a cookie. The announcement triggered panic among advertising companies, whose trade association, the Interactive Advertising Bureau, denounced the feature and rushed out technical recommendations to work around it. Obviously the level of user “interaction” with Criteo during the redirect described above fails ITP’s test, which meant Criteo was locked out again.

    It appears that Criteo’s response was to abandon cookies for Safari users and to generate a persistent identifier by piggybacking on a key user safety technology called HSTS. When a browser connects to a site via HTTPS (i.e., a site that supports encryption), the site can respond with an HTTP Strict Transport Security (HSTS) policy instructing the browser to only contact it using HTTPS. Without an HSTS policy, your browser might try to connect to the site over regular old unencrypted HTTP in the future—and thus be vulnerable to a downgrade attack. Criteo used HSTS to sneak data into the browser cache to produce an identifier it could use to recognize the individual’s browser and profile them. This approach relied on the fact that it is difficult to clear HSTS data in Safari, requiring the user to purge the cache entirely to delete the identifier. For EFF, it is especially worrisome that Criteo used a technique that pits privacy protection against user security interests by targeting HSTS. Use of this mechanism was documented by Gotham City Research, an investment firm that has bet against Criteo’s stock.
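
    Criteo’s implementation has not been published, but the general idea of encoding an identifier in HSTS state can be illustrated with a small in-memory simulation; the subdomain names and eight-bit identifier below are invented, and a real deployment would involve actual HTTPS responses rather than a Python set.

        # Conceptual simulation of an HSTS "supercookie": an identifier is encoded
        # by setting (or not setting) an HSTS policy on a series of subdomains the
        # tracker controls, then read back by observing which hosts the browser
        # auto-upgrades to HTTPS. In-memory illustration only.
        NUM_BITS = 8
        SUBDOMAINS = [f"b{i}.tracker.example" for i in range(NUM_BITS)]

        def write_id(user_id: int, hsts_cache: set) -> None:
            """'Set' HSTS on subdomain i exactly when bit i of the identifier is 1."""
            for i, host in enumerate(SUBDOMAINS):
                if (user_id >> i) & 1:
                    hsts_cache.add(host)  # stands in for a Strict-Transport-Security header

        def read_id(hsts_cache: set) -> int:
            """Recover the identifier from which subdomains would be auto-upgraded."""
            return sum(1 << i for i, host in enumerate(SUBDOMAINS) if host in hsts_cache)

        browser_hsts_cache = set()  # persists until the user purges the cache
        write_id(0b10110010, browser_hsts_cache)
        print(read_id(browser_hsts_cache) == 0b10110010)  # True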

    In early December, Apple released an update to iOS and Safari which disabled Criteo’s ability to exploit HSTS. This led to Criteo revising down their revenue forecasts and a sharp fall in their share price.

    How is Criteo “Acceptable Advertising”?

    “…we sort of seek the consent of users, just like we had done before.”[1] - Erich Eichmann, CEO, Criteo

    “Only users who don’t already have a Criteo identifier will see the header or footer, and it is displayed only once per device. Thanks to [the?] Criteo advertisers network, most of your users would have already accepted our services on the website of another of our partner. On average, only 5% of your users will see the headers or footers, and for those who do, the typical opt-out rate is less than .2%.” - Criteo Support Center

    Criteo styles itself as a leader in privacy practices, yet they have dedicated significant engineering resources to circumventing privacy tools. They claim to have obtained user consent to tracking based on a minimal warning delivered in what we believe to be a highly confusing context. When a user first visits a site containing Criteo’s script, they receive a small notice stating, “Click any link to use Criteo’s cross-site tracking technology.” If the user continues to use the site, they are deemed to have consented. Little wonder that Criteo can boast of a low opt-out rate to their clients.

    Given Criteo’s observed behaviour prior to the ITP episode, its incorporation into Acceptable Ads in December 2015 aroused criticism among users of ad blockers. We have written elsewhere about how Acceptable Ads creates a clash of interests between adblocking companies and their users, especially those concerned with their privacy. But Criteo’s participation in Acceptable Ads brings into focus the substantive problem with the program itself. The criteria for Acceptable Ads are concerned chiefly with format and aesthetic aspects (e.g., How big is the ad? How visually intrusive is it? Does it blink?) and exclude privacy concerns. Retargeting is unpopular and mocked by users, in part because it wears its creepy tracking practices on its sleeve. Our view is that Criteo’s bad behavior should exclude its products from being deemed “acceptable” in any way.

    The fact that the Acceptable Ads Initiative has approved Criteo’s user-tracking-by-misusing-security-features ads is indicative of the privacy problems we believe to be at the heart of the Acceptable Ads program. In March this year, Eyeo announced an Acceptable Ads Committee that will control the criteria for Acceptable Ads in the future. The Committee should start by instituting a rule which excludes companies that circumvent explicit privacy tools or exploit user security technologies for the purpose of tracking.

    1. http://criteo.investorroom.com/download/Transcript_Q3+2017+Earnings_EDITED.pdf

    https://www.eff.org/deeplinks/2017/12/arms-race-against-trackers-safari-leads-criteo-30

  • Have you ever sent a motivational text to a friend? If you have, perhaps you tailored your message to an activity or location by saying “Good luck in the race!” or “Have fun in New York!” Now, imagine doing this automatically with a computer. What a great invention. Actually, no. That’s not a good invention; it’s our latest Stupid Patent of the Month.

    U.S. Patent No. 9,069,648 is titled “Systems and methods for delivering activity based suggestive (ABS) messages.” The patent describes sending “motivational messages,” based “on the current or anticipated activity of the user,” to a “personal electronic device.” The patent provides examples such as sending the message “don’t give up” when the user is running up a hill. The examples aren’t limited to health or exercise. For example, the patent suggests sending messages like “do not fear” and “God is with you” when a “user enters a dangerous neighborhood.”

    The patent’s description of its invention is filled with silly, non-standard acronyms like ABS for “activity based suggestive” messages or EBIF for “electronic based intelligence function.” These silly acronyms create an illusion of complexity where plain, descriptive language would reveal the mundane nature of the supposed invention. For example, what the patent grandly calls EBIF appears to be nothing more than standard computer processing.

    The ’648 patent is owned by Motivational Health Messaging LLC. While this may be a new company, at least one of the people behind it has been involved in massive patent trolling campaigns before. And the two named inventors have both been inventors on patents that trolls have asserted hundreds of times. One is also an inventor listed on patents asserted by infamous patent troll Shipping and Transit LLC. The other named inventor is the inventor on the patents asserted by Electronic Communication Technologies LLC. Those two entities (with their predecessors) brought over 700 lawsuits, many against very small businesses. In other words, the ’648 patent has been issued to Troll Co. at 1 Troll Street, Troll Town, Trollida USA.

    We believe that the claims of the ’648 patent are clearly invalid under the Supreme Court’s decision in Alice v. CLS Bank, which held abstract ideas do not become eligible for a patent merely because they are implemented in conventional computer technology. Indeed, the patent repeatedly emphasizes that the claimed methods are not tied to any particular hardware or software. For example, it states:

    The software and software logic described in this document … which comprises an ordered listing of executable instructions for implementing logical functions, can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.

    The ’648 patent issued on June 30, 2015, a full year after the Supreme Court’s Alice ruling. Despite this, the patent examiner never even discussed the decision. If Alice is to mean anything at all, it has to be applied to an application like this one.

    In our view, if Motivational Health Messaging asserts its patent in court, any defendant that fought back should prevail under Alice. Indeed, we would hope that the court would strongly consider awarding attorney’s fees to the defendant in such a case. Shipping & Transit has now had two fee awards made against it for asserting patents that are clearly invalid under Alice. And the Federal Circuit recently held that fee awards can be appropriate when patent owners make objectively unreasonable arguments concerning Alice.

    In addition to the problems under Alice, we believe the claims of the ’648 patent should have been rejected as obvious. When the application was filed in 2012, there was nothing new about sending motivational messages or automatically tailoring messages to things like location. In one proposed embodiment, the patent suggests that a “user walking to a hole may be delivered ABS messages, including reminders or instructions on how to play a particular hole.” But golf apps were already doing this. The Patent Office didn’t consider any real-world mobile phone applications when reviewing the application.

    If you want to look for prior art yourself, Unified Patents is running a crowdsourcing contest to find the best prior art to invalidate the ’648 patent. Aside from the warm feelings that come from fighting patent trolls, there is a $2000 prize pool.

    Despite the weakness of its patent, Motivational Health Messaging LLC might still send out demand letters. If you receive such a letter, you can contact EFF and we can help you find counsel.

    We have long complained that the Patent Office promotes patent trolling by granting obvious and/or abstract software patents. The history of the ’648 patent shows how the Patent Office’s failure to properly review applications leads to bad patents falling into the hands of trolls.
