Stop SESTA: Whose Voices Will SESTA Silence?

  • Overreliance on Automated Filters Would Push Victims Off of the Internet

    In all of the debate about the Stop Enabling Sex Traffickers Act (SESTA, S. 1693), there’s one question that’s received surprisingly little airplay: under SESTA, what would online platforms do in order to protect themselves from the increased liability for their users’ speech?

    With the threat of overwhelming criminal and civil liability hanging over their heads, Internet platforms would likely turn to automated filtering of users’ speech in a big way. That’s bad news because when platforms rely too heavily on automated filtering, it almost always results in some voices being silenced. And the most marginalized voices in society can be the first to disappear.

    Take Action

    Tell Congress: Stop SESTA.

    Section 230 Built Internet Communities

    The modern Internet is a complex system of online intermediaries—web hosting providers, social media platforms, news websites that host comments—all of which we use to speak out and communicate with each other. Those platforms are all enabled by Section 230, a law that protects platforms from some types of liability for their users’ speech. Without those protections, most online intermediaries would not exist in their current form; the risk of liability would simply be too high.

    Section 230 still allows authorities to prosecute platforms that break federal criminal law, but it keeps platforms from being punished for their customers’ actions in federal civil court or at the state level. This careful balance gives online platforms the freedom to set and enforce their own community standards while still allowing the government to hold platforms accountable for criminal behavior.


    SESTA would throw off that balance by shifting additional liability to intermediaries. Many online communities would have little choice but to mitigate that risk by investing heavily in policing their members’ speech.

    Or perhaps hire computers to police their members’ speech for them.

    The Trouble with Bots

    Massive cloud software company Oracle recently endorsed SESTA, but Oracle’s letter of support actually confirms one of the bill’s biggest problems—SESTA would effectively require Internet businesses to place more trust than ever before in automated filtering technologies to police their users’ activity.

    While automated filtering technologies have certainly improved since Section 230 passed in 1996, Oracle implies that bots can now filter out sex traffickers’ activity with near-perfect accuracy without causing any collateral damage.

    That’s simply not true. At best, automated filtering provides tools that can aid human moderators in finding content that may need further review. That review still requires human community managers. But many Internet companies (including most startups) would be unable to dedicate enough staff time to fully mitigate the risk of litigation under SESTA.

    So what will websites do if they don’t have enough human reviewers to match their growing user bases? It’s likely that they’ll tune their automated filters to err on the side of extreme caution—which means silencing legitimate voices. To see how that would happen, look at the recent controversy over Google’s Perspective API, a tool designed to measure the “toxicity” in online discussions. Perspective API flags statements like “I am a gay woman” or “I am a black man” as toxic because it fails to differentiate between Internet users talking about themselves and making statements about marginalized groups. It even flagged “I am a Jew” as more toxic than “I don’t like Jews.”

    See the problem? Now imagine a tool designed to filter out speech that advertises sex trafficking to comply with SESTA. From a technical perspective, creating such a tool that doesn’t also flag a victim of trafficking telling her story or trying to find help would be extremely difficult. (For that matter, so would training it to differentiate trafficking from consensual sex work.) If Google, the largest artificial intelligence (AI) company on the planet, can’t develop an algorithm that can reason about whether a simple statement is toxic, how likely is it that any company will be able to automatically and accurately detect sex trafficking advertisements?

    Despite all the progress we’ve made in analytics and AI since 1996, machines still have an incredibly difficult time understanding subtlety and context when it comes to human speech. Filtering algorithms can’t yet understand things like the motivation behind a post—a huge factor in detecting the difference between a post that actually advertises sex trafficking and a post that criticizes sex trafficking and provides support to victims.
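    The failure mode is easy to reproduce with a toy bag-of-words scorer. The sketch below is purely illustrative (the weights and threshold are invented, and this is not how Perspective actually works): identity terms that co-occur with abuse in training data end up carrying high weights, so the scorer cannot tell a self-description from an attack.

```python
# Toy bag-of-words "toxicity" scorer -- an illustration of the failure
# mode, NOT any real model. Identity terms co-occur with abuse in
# training data, so they end up carrying the weight.
WEIGHTS = {"gay": 0.8, "black": 0.5, "jew": 0.9, "jews": 0.6}
THRESHOLD = 0.15

def toxicity(sentence: str) -> float:
    # Average the per-word weights; unknown words contribute nothing.
    words = sentence.lower().split()
    return sum(WEIGHTS.get(w, 0.0) for w in words) / len(words)

# A self-description gets flagged as "toxic"...
assert toxicity("i am a gay woman") > THRESHOLD
# ...and scores higher than an actually hostile statement.
assert toxicity("i am a jew") > toxicity("i don't like jews")
```

    Nothing in the score distinguishes *who is speaking about whom*; the model only sees which words appear.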

    This is a classic example of the “nerd harder” problem, where policymakers believe that technology can advance to fit their specifications as soon as they pass a law requiring it to do so. They fail to recognize the inherent limits of automated filtering: bots are useful in some cases as an aid to human moderators, but they’ll never be appropriate as the unchecked gatekeeper to free expression. If we give them that position, then victims of sex trafficking may be the first people locked out.

    At the same time, it’s also extremely unlikely that filtering systems will actually be able to stop determined sex traffickers from posting. That’s because it’s not currently technologically possible to create an automated filtering system that can’t be fooled by a human. For example, say you have a filter that just looks for certain keywords or phrases. Sex traffickers will learn what words or phrases trigger the filter and avoid them by using other words in their place.
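    The keyword-evasion problem can be shown in a few lines. This is a deliberately naive sketch (the blocklist words are placeholders, not real trafficking terms): trivial substitutions that any human reader sees through defeat the filter entirely.

```python
# A naive keyword blocklist filter (illustrative only).
BLOCKLIST = {"forbidden", "banned"}

def blocked(post: str) -> bool:
    # Flag the post if any word matches the blocklist exactly.
    return any(word in BLOCKLIST for word in post.lower().split())

assert blocked("this is forbidden content")
# Trivial substitutions defeat it: digits for letters, spacing, homoglyphs.
assert not blocked("this is f0rbidden content")
assert not blocked("this is forb idden content")
```

    Expanding the blocklist just restarts the arms race with a slightly longer list.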


    Building a more complicated filter—say, by using advanced machine learning or AI techniques—won’t solve the problem either. That’s because all complex machine learning systems are susceptible to what are known as “adversarial inputs”—examples of data that look normal to a human, but which completely fool AI-based classification systems. For example, an AI-based filtering system that recognizes sex trafficking posts might look at such a post and classify it correctly—unless the sex trafficker adds some random-looking-yet-carefully-chosen characters to the post (maybe even a block of carefully constructed incomprehensible text at the end), in which case the filtering system will classify the post as having nothing to do with sex trafficking.

    If you’ve ever seen a spam email with a block of nonsense text at the bottom, then you’ve seen this tactic in action. Some spammers add blocks of text from books or articles to the bottom of their spam emails in order to fool spam filters into thinking the emails are legitimate. Research on solving this problem is ongoing, but slow. New developments in AI research will likely make filters a more effective aid to human review, but when freedom of expression is at stake, they’ll never supplant human moderators.
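    The padding tactic works against any filter that adds up per-word evidence. Here is a minimal sketch with invented weights (not any real spam filter): appending enough innocuous words drags the total score below the flagging threshold.

```python
# Per-word log-odds from a hypothetical tiny corpus: positive means
# "spam-like", negative means "ham-like". Values are invented.
LOG_ODDS = {"cheap": 1.2, "pills": 1.5, "now": 0.4,
            "meeting": -0.9, "report": -0.8}

def spam_score(text: str) -> float:
    # Sum the evidence; score > 0 means the message is flagged as spam.
    return sum(LOG_ODDS.get(w, 0.0) for w in text.lower().split())

spam = "cheap pills now"
assert spam_score(spam) > 0            # flagged

# Adversarial padding: append benign-looking words until the
# summed score drops below the threshold.
padded = spam + " meeting report" * 3
assert spam_score(padded) < 0          # slips past the filter
```

    The padded message looks obviously spammy to a human, but to the additive model it is indistinguishable from office chatter.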

    In other words, not only would automated filters be ineffective at removing sex trafficking ads from the Internet, they would also almost certainly end up silencing the very victims lawmakers are trying to help.

    Don’t Put Machines in Charge of Free Speech

    One irony of SESTA supporters’ praise for automated filtering is that Section 230 made algorithmic filtering possible. In 1995, a New York court ruled that because the online service Prodigy engaged in some editing of its members’ posts, it could be held liable as a “publisher” for the posts that it didn’t filter. When Reps. Christopher Cox and Ron Wyden introduced the Internet Freedom and Family Empowerment Act (the bill that would evolve into Section 230), they did it partially to remove that legal disincentive for online platforms to enforce community standards. Without Section 230, platforms would never have invested in improving filtering technologies.

    However, automated filters simply cannot be trusted as the final arbiters of online speech. At best, they’re useful as an aid to human moderators, enforcing standards that are transparent to the user community. And the platforms using them must carefully balance enforcing standards with respecting users’ right to express themselves. Laws must protect that balance by shielding platforms from liability for their customers’ actions. Otherwise, marginalized voices can be the first ones pushed off the Internet.

    Take Action

    Tell Congress: Stop SESTA.

Tmux Commands

screen and tmux

A comparison of the features (or, more so, a table of notes on accessing some of those features) for GNU screen and BSD-licensed tmux.

The formatting here is simple enough to understand (I would hope). ^ means ctrl+, so ^x is ctrl+x. M- means meta+ (generally left-alt or escape), so M-x is left-alt+x.

It should be noted that this is nowhere near a full feature-set of either program. This - being a cheat-sheet - is just to point out the most basic features to get you on the road.

Trust the developers and manpage writers more than me. This document is originally from 2009 when tmux was still new - since then both of these programs have had many updates and features added (not all of which have been dutifully noted here).

| Action | tmux | screen |
|---|---|---|
| start a new session | tmux / tmux new / tmux new-session | screen |
| re-attach a detached session | tmux attach / tmux attach-session | screen -r |
| re-attach an attached session (detaching it from elsewhere) | tmux attach -d / tmux attach-session -d | screen -dr |
| re-attach an attached session (keeping it attached elsewhere) | tmux attach / tmux attach-session | screen -x |
| detach from currently attached session | ^b d / ^b :detach | ^a ^d / ^a :detach |
| rename-window to newname | ^b , <newname> / ^b :rename-window <newname> | ^a A <newname> |
| list windows | ^b w | ^a w |
| list windows in chooseable menu | | ^a " |
| go to window # | ^b # | ^a # |
| go to last-active window | ^b l | ^a ^a |
| go to next window | ^b n | ^a n |
| go to previous window | ^b p | ^a p |
| see keybindings | ^b ? | ^a ? |
| list sessions | ^b s / tmux ls / tmux list-sessions | screen -ls |
| toggle visual bell | | ^a ^g |
| create another window | ^b c | ^a c |
| exit current shell/window | ^d | ^d |
| split window/pane horizontally | ^b " | ^a S |
| split window/pane vertically | ^b % | ^a \| |
| switch to other pane | ^b o | ^a <tab> |
| kill the current pane | ^b x (or logout/^D) | |
| collapse the current pane/split (but leave processes running) | | ^a X |
| cycle location of panes | ^b ^o | |
| swap current pane with previous | ^b { | |
| swap current pane with next | ^b } | |
| show time | ^b t | |
| show numeric values of panes | ^b q | |
| toggle zoom-state of current pane (maximize/return current pane) | ^b z | |
| break the current pane out of its window (to form a new window) | ^b ! | |
| re-arrange current panes within same window (different layouts) | ^b [space] | |
| kill the current window (and all panes within) | ^b :killw [target-window] | |
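If you are coming from screen, the easiest transition aid is to remap tmux's prefix to screen's ^a. These are standard tmux options, set in ~/.tmux.conf:

```
# ~/.tmux.conf: use screen's ^a prefix instead of tmux's default ^b
unbind C-b
set -g prefix C-a
# let ^a ^a send a literal ^a through to the program in the pane
bind C-a send-prefix
```

With this in place, most of the ^b bindings in the table above become ^a bindings, matching screen muscle memory.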
  • Criteo is an ad company. You may not have heard of them, but they do retargeting, the type of ads that pursue users across the web, beseeching them to purchase a product they once viewed or have already bought. To identify users across websites, Criteo relies on cross-site tracking, using cookies and other methods to follow users as they browse. This has led them to try to circumvent the privacy features in Apple’s Safari browser that protect its users from such tracking. Despite this apparently antagonistic attitude towards user privacy, Criteo has also been whitelisted by the Acceptable Ads initiative. This means that their ads are unblocked by popular ad blockers such as AdBlock and Adblock Plus. Criteo pays Eyeo, the operator of Acceptable Ads, for this whitelisting and must comply with its format requirements. But this also means they can track any user of these ad blockers who has not disabled Acceptable Ads, even if they have installed privacy tools such as EasyPrivacy with the intention of protecting themselves. EFF is concerned about Criteo’s continued anti-privacy actions and their continued inclusion in Acceptable Ads.

    Safari Shuts out Third Party Cookies…

    All popular browsers give users control over who gets to set cookies, but Safari is the only one that blocks third-party cookies (those set by a domain other than the site you are visiting) by default. (Safari’s choice is important because only 5-10% of users ever change default settings in software.) Criteo relies on third-party cookies. Since users have little reason to visit Criteo’s own website, the company gets its cookies onto users’ machines through its integration on many online retail websites. Safari’s cookie blocking is a major problem for Criteo, especially given the large and lucrative nature of iPhone’s user base. Rather than accept this, Criteo has repeatedly implemented ways to defeat Safari’s privacy protections.

    One workaround researchers detected Criteo using was to redirect users from sites where their service was present to their own. For example, if you visited a participating retailer and clicked on a product category, you would first be diverted to a Criteo-controlled domain and then redirected back to the retailer. Although imperceptible to the user, this detour was enough to persuade the browser that Criteo’s domain is a site you chose to visit, and therefore a first party entitled to set a cookie rather than a third party. Criteo applied for a patent on this method in August 2013.

    …And Closes the Backdoor

    Last summer, however, Apple unveiled a new version of Safari with more sophisticated cookie handling—called Intelligent Tracking Prevention (ITP)—which killed off the redirect technique as a means to circumvent the cookie controls. The browser now analyzes if the user has engaged with a website in a meaningful way before allowing it to set a cookie. The announcement triggered panic among advertising companies, whose trade association, the Interactive Advertising Bureau, denounced the feature and rushed out technical recommendations to work around it. Obviously the level of user “interaction” with Criteo during the redirect described above fails ITP’s test, which meant Criteo was locked out again.

    It appears that Criteo’s response was to abandon cookies for Safari users and to generate a persistent identifier by piggybacking on a key user safety technology called HSTS. When a browser connects to a site via HTTPS (i.e. a site that supports encryption), the site can respond with an HTTP Strict Transport Security (HSTS) policy, instructing the browser to only contact it using HTTPS. Without an HSTS policy, your browser might try to connect to the site over regular old unencrypted HTTP in the future—and thus be vulnerable to a downgrade attack. Criteo used HSTS to sneak data into the browser cache to produce an identifier it could use to recognize the individual’s browser and profile them. This approach relied on the fact that it is difficult to clear HSTS data in Safari, requiring the user to purge the cache entirely to delete the identifier. For EFF, it is especially worrisome that Criteo used a technique that pits privacy protection against user security interests by targeting HSTS. Use of this mechanism was documented by Gotham City Research, an investment firm that has bet against Criteo’s stock.
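    The general HSTS super-cookie technique can be sketched in a few lines. This is a hypothetical illustration of the mechanism (invented subdomain names, not Criteo’s actual code): each of N tracker-controlled subdomains either has HSTS set or not, and together those bits store an N-bit identifier the tracker can read back later.

```python
# Hypothetical sketch of an HSTS super-cookie (not Criteo's code).
# Each tracker-controlled subdomain stores one bit: "HSTS set" = 1.
SUBDOMAINS = ["t%d.tracker.example" % i for i in range(8)]

def assign_id(user_id: int) -> dict:
    # On first visit, set HSTS only on the subdomains matching
    # the 1-bits of the identifier.
    return {d: bool((user_id >> i) & 1) for i, d in enumerate(SUBDOMAINS)}

def read_id(hsts_state: dict) -> int:
    # Later, probe each subdomain over plain HTTP: a browser that has
    # HSTS set silently upgrades the request to HTTPS, revealing the bit.
    return sum(1 << i for i, d in enumerate(SUBDOMAINS) if hsts_state[d])

assert read_id(assign_id(0xA5)) == 0xA5
```

    Because the "cookie" lives in security state rather than cookie storage, clearing cookies doesn’t remove it—which is exactly why abusing HSTS pits privacy against security.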

    In early December, Apple released an update to iOS and Safari which disabled Criteo’s ability to exploit HSTS. This led to Criteo revising down their revenue forecasts and a sharp fall in their share price.

    How is Criteo “Acceptable Advertising”?

    "… w__e sort of seek the consent of users, just like we had done before_."__1_ - Erich Eichmann, CEO Criteo

    _"Only users who don’t already have a Criteo identifier will see the header or footer, and it is displayed only once per device. Thanks to [the?] Criteo advertisers network, most of your users would have already accepted our services on the website of another of our partner. On average, only 5% of your users will see the headers or footers, and for those who do, the typical opt-out rate is less than .2%._" - Criteo Support Center

    Criteo styles itself as a leader in privacy practices, yet they have dedicated significant engineering resources to circumventing privacy tools. They claim to have obtained user consent to tracking based on a minimal warning delivered in what we believe to be a highly confusing context. When a user first visits a site containing Criteo’s script, they receive a small notice stating, “Click any link to use Criteo’s cross-site tracking technology.” If the user continues to use the site, they are deemed to have consented. Little wonder that Criteo can boast of a low opt-out rate to their clients.

    Given Criteo’s observed behaviour prior to the ITP episode, its incorporation into Acceptable Ads in December 2015 aroused criticism among users of ad blockers. We have written elsewhere about how Acceptable Ads creates a clash of interests between adblocking companies and their users, especially those concerned with their privacy. But Criteo’s participation in Acceptable Ads brings into focus the substantive problem with the program itself. The criteria for Acceptable Ads are concerned chiefly with format and aesthetic aspects (e.g. How big is the ad? How visually intrusive? Does it blink?) and exclude privacy concerns. Retargeting is unpopular and mocked by users, in part because it wears its creepy tracking practices on its sleeve. Our view is that Criteo’s bad behavior should exclude its products from being deemed “acceptable” in any way.

    The fact that the Acceptable Ads Initiative has approved Criteo’s user-tracking-by-misusing-security-features ads is indicative of the privacy problems we believe to be at the heart of the Acceptable Ads program. In March this year, Eyeo announced an Acceptable Ads Committee that will control the criteria for Acceptable Ads in the future. The Committee should start by instituting a rule which excludes companies that circumvent explicit privacy tools or exploit user security technologies for the purpose of tracking.


  • Have you ever sent a motivational text to a friend? If you have, perhaps you tailored your message to an activity or location by saying “Good luck in the race!” or “Have fun in New York!” Now, imagine doing this automatically with a computer. What a great invention. Actually, no. That’s not a good invention; it’s our latest Stupid Patent of the Month.

    U.S. Patent No. 9,069,648 is titled “Systems and methods for delivering activity based suggestive (ABS) messages.” The patent describes sending “motivational messages,” based “on the current or anticipated activity of the user,” to a “personal electronic device.” The patent provides examples such as sending the message “don’t give up” when the user is running up a hill. The examples aren’t limited to health or exercise. For example, the patent suggests sending messages like “do not fear” and “God is with you” when a “user enters a dangerous neighborhood.”

    The patent’s description of its invention is filled with silly, non-standard acronyms like ABS for “activity based suggestive” messages or EBIF for “electronic based intelligence function.” These silly acronyms create an illusion of complexity where plain, descriptive language would reveal the mundane nature of the supposed invention. For example, what the patent grandly calls EBIF appears to be nothing more than standard computer processing.
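    Stripped of the acronyms, the claimed system reduces to a lookup from a detected activity to a canned message. The sketch below is hypothetical (the activity labels and messages are ours, drawn from the patent’s own examples, not its actual claims language):

```python
# Hypothetical sketch of the '648 patent's "invention": map the user's
# current or anticipated activity to a canned motivational message.
MESSAGES = {
    "running_uphill": "Don't give up!",
    "race": "Good luck in the race!",
    "new_york": "Have fun in New York!",
}

def suggest_message(activity: str) -> str:
    # The grandly named "electronic based intelligence function",
    # rendered as the dictionary lookup it appears to be.
    return MESSAGES.get(activity, "You can do it!")

assert suggest_message("running_uphill") == "Don't give up!"
```

    That the whole scheme fits in a dictionary lookup is precisely the point: conventional computer processing dressed up in acronyms.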

    The ’648 patent is owned by Motivational Health Messaging LLC. While this may be a new company, at least one of the people behind it has been involved in massive patent trolling campaigns before. And the two named inventors have both been inventors on patents that trolls have asserted hundreds of times. One is also an inventor listed on patents asserted by infamous patent troll Shipping and Transit LLC. The other named inventor is the inventor on the patents asserted by Electronic Communication Technologies LLC. Those two entities (with their predecessors) brought over 700 lawsuits, many against very small businesses. In other words, the ’648 patent has been issued to Troll Co. at 1 Troll Street, Troll Town, Trollida USA.

    We believe that the claims of the ’648 patent are clearly invalid under the Supreme Court’s decision in Alice v. CLS Bank, which held abstract ideas do not become eligible for a patent merely because they are implemented in conventional computer technology. Indeed, the patent repeatedly emphasizes that the claimed methods are not tied to any particular hardware or software. For example, it states:

    The software and software logic described in this document … which comprises an ordered listing of executable instructions for implementing logical functions, can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.

    The ’648 patent issued on June 30, 2015, a full year after the Supreme Court’s Alice ruling. Despite this, the patent examiner never even discussed the decision. If Alice is to mean anything at all, it has to be applied to an application like this one.

    In our view, if Motivational Health Messaging asserts its patent in court, any defendant that fought back should prevail under Alice. Indeed, we would hope that the court would strongly consider awarding attorney’s fees to the defendant in such a case. Shipping & Transit has now had two fee awards made against it for asserting patents that are clearly invalid under Alice. And the Federal Circuit recently held that fee awards can be appropriate when patent owners make objectively unreasonable arguments concerning Alice.

    In addition to the problems under Alice, we believe the claims of the ’648 patent should have been rejected as obvious. When the application was filed in 2012, there was nothing new about sending motivational messages or automatically tailoring messages to things like location. In one proposed embodiment, the patent suggests that a “user walking to a hole may be delivered ABS messages, including reminders or instructions on how to play a particular hole.” But golf apps were already doing this. The Patent Office didn’t consider any real-world mobile phone applications when reviewing the application.

    If you want to look for prior art yourself, Unified Patents is running a crowdsourcing contest to find the best prior art to invalidate the ’648 patent. Aside from the warm feelings that come from fighting patent trolls, there is a $2000 prize pool.

    Despite the weakness of its patent, Motivational Health Messaging LLC might still send out demand letters. If you receive such a letter, you can contact EFF and we can help you find counsel.

    We have long complained that the Patent Office promotes patent trolling by granting obvious and/or abstract software patents. The history of the ’648 patent shows how the Patent Office’s failure to properly review applications leads to bad patents falling into the hands of trolls.
