Stupid Patent Data of the Month: the Devil in the Details



  • A Misunderstanding of Data Leads to a Misunderstanding of Patent Law and Policy

    Bad patents shouldn’t be used to stifle competition. A process for challenging bad patents that were improperly issued is important to keeping consumer costs down and encouraging new innovation. But according to a recent post on a patent blog, post-grant procedures at the Patent Office regularly get it “wrong” and improperly invalidate patents. We took a deep dive into the data patent lobbyists are relying on and found that, far from supporting their claims, it undermines them.

    The Patent Office has several procedures to determine whether an issued patent was improperly granted to a party that does not meet the legal standard for patentability of an invention. The most significant of these processes is called inter partes review, and is essential to reining in overly broad and bogus patents. The process helps prevent patent trolling by providing a target with a low-cost avenue for defense, so it is harder for trolls to extract a nuisance-value settlement simply because litigating is expensive. The process is, for many reasons, disliked by some patent owners. Congress is taking a new look at this process right now as a result of patent owners’ latest attempts to insulate their patents from review.

    An incorrect claim about inter partes review (IPR) and similar procedures at the Patent Trial and Appeal Board (PTAB) has been circulating, and it was recently repeated in written comments at a congressional hearing by Philip Johnson, former head of intellectual property at Johnson & Johnson. Josh Malone and Steve Brachmann, writing for a patent blog called “IPWatchdog,” are the source of this error. In their article, cited in the comments to Congress, they claim that the PTAB is issuing decisions contrary to district courts at a very high rate.

    We took a closer look at the data they use and found that the rate of disagreement is actually quite small: about 7%, not the 76% claimed by Malone and Brachmann. How did they get it so wrong? To explain, we’ll have to get into the nuts and bolts of how such an analysis can be run.

    Malone and Brachmann relied on data provided by a service called “Docket Navigator,” which collects statistics and documents related to patent litigation and enforcement. Their search counted how many patents Docket Navigator marked with both a finding of “unpatentable” (from the Patent Office) and a finding of “not invalid” (from a district court).

    This is a very, very simplistic analysis. For instance, it would consider an unpatentability finding by the PTAB about Claim 1 of a patent to be inconsistent with a district court finding that Claim 54 is not invalid. It would consider a finding of anticipation by the PTAB to be inconsistent with a district court rejecting an argument for invalidity based on a lack of written description. These are entirely different legal issues; different results are hardly inconsistent.

    EFF, along with CCIA, ran the same Docket Navigator searches Malone and Brachmann ran: one for patents found “not invalid” and “unpatentable or not unpatentable,” which generated 273 results, and one for patents found “unpatentable” and “not invalid,” which generated 208 results (our analysis includes a few results that weren’t yet available when Malone and Brachmann ran their search). We looked into each of the 208 results that Docket Navigator returned for patents found unpatentable and not invalid. Our analysis shows that the “208” number, and consequently the claimed rate at which the Patent Office supposedly gets it “wrong” compared with the times a court supposedly got it “right,” is well off the mark.

    We reached our conclusions based on the following methodology:

    • We considered “inconsistent results” to occur any time the Patent Office reached a determination on any one of the conditions for patentability (namely, any of 35 U.S.C. §§ 101, 102, 103, or 112) and the district court reached a different conclusion based on the same condition for patentability, with some important caveats, as discussed below and illustrated in the sketch after this list. For example, if the Patent Office found claims invalid for lack of novelty (35 U.S.C. § 102), we would not treat as inconsistent a district court finding that the claims were definite (35 U.S.C. § 112(b)).
    • We did not distinguish between findings based on lack of novelty (35 U.S.C. § 102) and findings based on obviousness (35 U.S.C. § 103), as these bases are highly related. For example, if the Patent Office determined claims unpatentable based on anticipation, we would mark as inconsistent any jury finding that the claims were not obvious.
    • We did not consider a decision relating to the validity of one set of claims to be inconsistent with a decision relating to the validity of a different, distinct set of claims. For example, if the Patent Office found claims 1-5 of a patent not patentable, we would not consider that inconsistent with a district court finding claims 6-10 not invalid. We would count as inconsistent, however, any two differing decisions whose claims overlapped, even if the two sets of claims were not identical.
    • We distinguished between the conditions for patentability within 35 U.S.C. § 112. For example, a district court finding of definiteness under 35 U.S.C. § 112(b) would not be treated as inconsistent with a Patent Office finding of lack of written description under 35 U.S.C. § 112(a).
    • We did not consider a district court decision to be inconsistent with a Patent Office decision if that district court decision was later overturned by the Federal Circuit. However, we did treat a Patent Office decision as inconsistent with a district court decision even if that Patent Office decision was later reversed.1 For example, if the Patent Office found claims to be not patentable, but was later reversed by the Federal Circuit, we would still mark that decision as inconsistent with the district court. We even counted Patent Office decisions as inconsistent in the five cases where they were affirmed by the Federal Circuit and were therefore correct according to a higher authority than a district court. We did this to ensure we included results tending to support Malone and Brachmann’s thesis that the Patent Office was reaching the “wrong” results.
    • We excluded fourteen results that did not stem from any district court finding. Specifically, several patents were included because of findings by the International Trade Commission, an agency that (like the Patent Office) hears cases outside of an Article III court and without a jury. Those results do not fit Malone and Brachmann’s premise that the patents were found “valid in full and fair trials in a court of law.”
    • We excluded two results that should not have been included in the set and appear to be a coding error by Docket Navigator. These results were excluded because there was no final decision from the Patent Office as to unpatentability.
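
    To make these caveats concrete, here is a minimal sketch, in Python, of how a comparison along these lines could be coded. It is an illustration of the rules described above, not the actual tooling used by EFF, CCIA, or Docket Navigator, and the `Finding` type and its field names are hypothetical.

```python
# A minimal sketch (not EFF's actual code) of the comparison rules described
# above. Each finding is represented as a Finding record (hypothetical type).

from dataclasses import dataclass

# Sections 102 and 103 are treated as one "prior art" bucket; 112(a) and
# 112(b) are kept distinct, per the methodology above.
EQUIVALENT_BASES = {
    "101": "101",
    "102": "prior_art",
    "103": "prior_art",
    "112(a)": "112(a)",
    "112(b)": "112(b)",
}

@dataclass
class Finding:
    forum: str              # "PTAB", "district_court", "ITC", ...
    basis: str              # "101", "102", "103", "112(a)", "112(b)"
    claims: frozenset       # claim numbers the finding actually addressed
    invalid: bool           # True = unpatentable/invalid, False = not invalid
    on_merits: bool = True  # False for stipulations, adverse judgments, etc.

def inconsistent(pto: Finding, court: Finding) -> bool:
    """Apply the caveats above: same (or equivalent) statutory basis,
    overlapping claims, merits decisions, and an actual district court."""
    if pto.forum != "PTAB" or court.forum != "district_court":
        return False                      # e.g., ITC results are excluded
    if not (pto.on_merits and court.on_merits):
        return False                      # stipulations / adverse judgments
    if EQUIVALENT_BASES[pto.basis] != EQUIVALENT_BASES[court.basis]:
        return False                      # entirely different legal issues
    if not (pto.claims & court.claims):
        return False                      # no overlap in the claims addressed
    return pto.invalid != court.invalid   # opposite outcomes on the same issue

# Example: PTAB holds claims 1-5 obvious; a court holds claims 6-10 not
# invalid over prior art. No claim overlap, so the results are not counted
# as inconsistent.
pto = Finding("PTAB", "103", frozenset(range(1, 6)), invalid=True)
court = Finding("district_court", "102", frozenset(range(6, 11)), invalid=False)
assert not inconsistent(pto, court)
```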

    Here’s what we found in the 194 remaining cases:

    • A plurality of the results (n=85) were included only because the Patent Office determined claims were unpatentable based on failure to meet one or more requirements for patentability (usually 35 U.S.C. § 102 or 103) and a district court found the claims met other requirements for patentability (usually 35 U.S.C. § 101 or 112). That is, the district court made no finding whatsoever relating to the reasons why the Patent Office determined the claims should be canceled. Thus the Patent Office and the court did not disagree as to a finding on validity.

      • For example, the Docket Navigator results include U.S. Patent No. 5,563,883. The Patent Office determined claims 1, 3, and 4 of that patent were unpatentable based on obviousness (35 U.S.C. § 103). A district court determined that those same claims, however, met the definiteness requirement (35 U.S.C. § 112(b)). The Federal Circuit affirmed the Patent Office’s decision invalidating the claims, and the district court never decided whether those claims were obvious at all.

    • A further 46 results were situations where either (1) the patent owner requested that the Patent Office cancel claims or (2) the claims were stipulated to be “valid” as part of a settlement in district court. Thus the Patent Office and district court findings were not inconsistent, because at least one of them did not reach any decision on the merits.

      • For example, the Docket Navigator results include U.S. Patent No. 6,061,551. A jury found claims not invalid, but the Federal Circuit reversed that finding, holding the claims invalid. After that determination, the patent owner requested an adverse judgment at the Patent Office.
      • As another example, the Docket Navigator results include U.S. Patent No. 7,676,411. The Patent Office found claims invalid as abstract (35 U.S.C. § 101) and obvious (35 U.S.C. § 103). Because the parties stipulated that this patent was “valid” as part of a settlement, which is generally not considered to be a merits determination, this patent is also tagged as “not invalid” by Docket Navigator.
    • A further 15 results were not inconsistent for a variety of reasons.

      • For example, five results were not inconsistent because the Patent Office and the district court considered different patent claims. As another example, U.S. Patent No. 7,135,641 represented an instance where a jury found claims not invalid, but the district court judge reversed that finding post-trial. As a further example, U.S. Patent No. 5,371,734 was held “not invalid” on summary judgment in the district court, but that determination was later reversed by the Federal Circuit.

    Under this initial cut, only 48 of the entries arguably could be considered to have inconsistent or disagreeing results between the Patent Office and a district court.

    But in the majority of those cases (n=28), the judge or jury considered one set of prior art when determining whether the claims were new and nonobvious, while the Patent Office considered a different set. It is not surprising that the two forums would consider different evidence: Patent Office proceedings generally consider only certain types of prior art (patents and printed publications). That a district court proceeding may result in a finding of “not invalid” based on, e.g., prior use is not an inconsistent result.

    Eliminating the results where the Patent Office was considering completely different arguments and art leaves only 20 instances, out of the 273 in which a district court determined a patent “not invalid” for some reason, where the Patent Office arguably reached a different conclusion than a district court. That means the Patent Office is “inconsistent” with district courts only about 7% of the time, not 76% of the time.
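
    For readers who want to retrace the arithmetic, here is a short tally using only the category counts reported above. It is a sanity check on the numbers, not the underlying case-by-case analysis.

```python
# Rough arithmetic check of the breakdown reported above (the counts are the
# article's own figures; this is just a tally, not the underlying analysis).
remaining = 194                 # district-court results after exclusions
different_legal_issue = 85      # court ruled on a different statutory basis
no_merits_decision = 46         # adverse judgments / settlement stipulations
other_not_inconsistent = 15     # different claims, reversed findings, etc.

arguably_inconsistent = remaining - (different_legal_issue
                                     + no_merits_decision
                                     + other_not_inconsistent)
print(arguably_inconsistent)    # 48

different_prior_art = 28        # the two forums weighed different evidence
truly_inconsistent = arguably_inconsistent - different_prior_art
print(truly_inconsistent)       # 20

not_invalid_rulings = 273       # district court "not invalid" results overall
print(f"{truly_inconsistent / not_invalid_rulings:.1%}")   # ~7.3%
```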

    It is also important to keep in mind that there have been over 1,800 final decisions in inter partes review, covered business method review, and post-grant review proceedings. Out of all of those, only 20 reached a conclusion that might be considered inconsistent with a district court in a way that negatively impacts patent owners. That is a rate of only around 1%, which is remarkably low. Moreover, inconsistent results happen even within the court system. For example, in Abbott v. Andrx, 452 F.3d 1331, the Federal Circuit found that Abbott’s patent was likely to be held invalid. But only one year later, in Abbott v. Andrx, 473 F.3d 1196, the Federal Circuit found that the same patent was likely to be held not invalid. The two different results were explained by the fact that the two defendants had presented different defenses. This is not unusual. The fact that two proceedings may reach different results does not mean the whole system is faulty.

    An analysis like ours takes time, and a few cases may have slipped through the cracks or been coded incorrectly, but the overall result demonstrates that the vast majority of patent owners are never subject to inconsistent results between the district courts and the Patent Office.

    It is disappointing that Johnson, Malone, and Brachmann made claims the data don’t support, but the episode offers a valuable lesson: when using data sets, it is important to understand exactly what the data represent and how to interpret them. Here, it appears that Malone and Brachmann’s misreading of the results provided by Docket Navigator propagated into Johnson’s testimony, and it likely would have traveled further had no one looked at it more closely.

    We’ve used both Docket Navigator and Lex Machina in our analyses on numerous occasions, and even in briefs we submit to courts. Both services provide extremely valuable information about the state of patent litigation and policy. But their usefulness is diminished when the data they present are misunderstood. As always, the devil is in the details.

    • 1. For this reason, our results differ slightly from those of CCIA, reported here. CCIA did not treat decisions as inconsistent if the Patent Office decision was later affirmed on appeal. Five patents we considered inconsistent in our analysis were excluded in CCIA’s analysis. Each approach has merit.

    https://www.eff.org/deeplinks/2017/11/stupid-patent-data-month-misunderstanding-data-leads-misunderstanding-patent-law





Tmux Commands

screen and tmux

A comparison of the features (or more-so just a table of notes for accessing some of those features) for GNU screen and BSD-licensed tmux.

The formatting here is simple enough to understand (I would hope). ^ means ctrl+, so ^x is ctrl+x. M- means meta (generally left-alt or escape)+, so M-x is left-alt+x

It should be noted that this is nowhere near a full feature-set of either program. This - being a cheat-sheet - is just to point out the most basic features to get you on the road.

Trust the developers and manpage writers more than me. This document was originally written in 2009, when tmux was still new - since then both of these programs have had many updates and features added (not all of which have been dutifully noted here).

Action | tmux | screen
------ | ---- | ------
start a new session | tmux OR tmux new OR tmux new-session | screen
re-attach a detached session | tmux attach OR tmux attach-session | screen -r
re-attach an attached session (detaching it from elsewhere) | tmux attach -d OR tmux attach-session -d | screen -dr
re-attach an attached session (keeping it attached elsewhere) | tmux attach OR tmux attach-session | screen -x
detach from currently attached session | ^b d OR ^b :detach | ^a ^d OR ^a :detach
rename-window to newname | ^b , <newname> OR ^b :rename-window <newn> | ^a A <newname>
list windows | ^b w | ^a w
list windows in chooseable menu | | ^a "
go to window # | ^b # | ^a #
go to last-active window | ^b l | ^a ^a
go to next window | ^b n | ^a n
go to previous window | ^b p | ^a p
see keybindings | ^b ? | ^a ?
list sessions | ^b s OR tmux ls OR tmux list-sessions | screen -ls
toggle visual bell | | ^a ^g
create another window | ^b c | ^a c
exit current shell/window | ^d | ^d
split window/pane horizontally | ^b " | ^a S
split window/pane vertically | ^b % | ^a |
switch to other pane | ^b o | ^a <tab>
kill the current pane | ^b x OR (logout/^D) |
collapse the current pane/split (but leave processes running) | | ^a X
cycle location of panes | ^b ^o |
swap current pane with previous | ^b { |
swap current pane with next | ^b } |
show time | ^b t |
show numeric values of panes | ^b q |
toggle zoom-state of current pane (maximize/return current pane) | ^b z |
break the current pane out of its window (to form new window) | ^b ! |
re-arrange current panels within same window (different layouts) | ^b [space] |
Kill the current window (and all panes within) | ^b killw [target-window] |

  • Criteo is an ad company. You may not have heard of them, but they do retargeting, the type of ads that pursue users across the web, beseeching them to purchase a product they once viewed or have already bought. To identify users across websites, Criteo relies on cross-site tracking, using cookies and other methods to follow users as they browse. This has led them to try to circumvent the privacy features in Apple’s Safari browser that protect its users from such tracking. Despite this apparently antagonistic attitude towards user privacy, Criteo has also been whitelisted by the Acceptable Ads initiative. This means that their ads are unblocked by popular ad blockers such as AdBlock and Adblock Plus. Criteo pays Eyeo, the operator of Acceptable Ads, for this whitelisting and must comply with their format requirements. But this also means they can track any user of these ad blockers who has not disabled Acceptable Ads, even if they have installed privacy tools such as EasyPrivacy with the intention of protecting themselves. EFF is concerned about Criteo’s continued anti-privacy actions and their continued inclusion in Acceptable Ads.

    Safari Shuts out Third Party Cookies…

    All popular browsers give users control over who gets to set cookies, but Safari is the only one that blocks third-party cookies (those set by a domain other than the site you are visiting) by default. (Safari’s choice is important because only 5-10% of users ever change default settings in software.) Criteo relies on third-party cookies. Since users have little reason to visit Criteo’s own website, the company gets its cookies onto users’ machines through its integration on many online retail websites. Safari’s cookie blocking is a major problem for Criteo, especially given the large and lucrative nature of iPhone’s user base. Rather than accept this, Criteo has repeatedly implemented ways to defeat Safari’s privacy protections.

    One workaround researchers detected Criteo using was to redirect users from sites where their service was present to Criteo’s own site. For example, if you visited wintercoats.com and clicked on a product category, you would first be diverted to criteo.com and then redirected to wintercoats.com/down-filled. Although imperceptible to the user, this detour was enough to persuade the browser that criteo.com is a site you chose to visit, and therefore a first party entitled to set a cookie rather than a third party. Criteo applied for a patent on this method in August 2013.
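
    To illustrate how this kind of redirect bounce works in general, here is a hypothetical, minimal sketch. It is not Criteo’s actual implementation, and the domain names are made up: a tracker endpoint only needs to set its cookie while it is briefly the first party, then send the visitor on to the real destination.

```python
# Hypothetical sketch of a "bounce" tracker: the user is briefly sent to
# tracker.example, which sets its cookie as a first party, then redirects
# the browser to the page the user actually wanted.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import uuid

class BounceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        # Where the user was really going, e.g. wintercoats.example/down-filled
        destination = params.get("dest", ["https://wintercoats.example/"])[0]
        self.send_response(302)
        # Because the browser is now "visiting" tracker.example directly,
        # this cookie is set in a first-party context.
        self.send_header("Set-Cookie",
                         f"uid={uuid.uuid4()}; Max-Age=31536000; Path=/")
        self.send_header("Location", destination)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), BounceHandler).serve_forever()
```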

    …And Closes the Backdoor

    Last summer, however, Apple unveiled a new version of Safari with more sophisticated cookie handling—called Intelligent Tracking Prevention (ITP)—which killed off the redirect technique as a means to circumvent the cookie controls. The browser now analyzes if the user has engaged with a website in a meaningful way before allowing it to set a cookie. The announcement triggered panic among advertising companies, whose trade association, the Interactive Advertising Bureau, denounced the feature and rushed out technical recommendations to work around it. Obviously the level of user “interaction” with Criteo during the redirect described above fails ITP’s test, which meant Criteo was locked out again.

    It appears that Criteo’s response was to abandon cookies for Safari users and instead generate a persistent identifier by piggybacking on a key user safety technology called HSTS. When a browser connects to a site via HTTPS (i.e., a site that supports encryption), the site can respond with an HTTP Strict Transport Security (HSTS) policy, instructing the browser to only contact it using HTTPS. Without an HSTS policy, your browser might try to connect to the site over regular old unencrypted HTTP in the future, and thus be vulnerable to a downgrade attack. Criteo used HSTS to sneak data into the browser cache and produce an identifier it could use to recognize the individual’s browser and profile them. This approach relied on the fact that it is difficult to clear HSTS data in Safari: deleting the identifier required purging the cache entirely. For EFF, it is especially worrisome that Criteo used a technique that pits privacy protection against user security interests by targeting HSTS. Use of this mechanism was documented by Gotham City Research, an investment firm that has bet against Criteo’s stock.
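
    The general shape of an HSTS-based identifier (often called an HSTS “super cookie”) is well documented. The following is a simulated, simplified sketch of the concept, not Criteo’s actual code, with made-up hostnames; the point is that HSTS state persists like a cookie but is not cleared like one.

```python
# Conceptual sketch of an HSTS "super cookie" (a simulation, not Criteo's
# code): an identifier is encoded by which of N subdomains the browser has
# been told to pin via Strict-Transport-Security, and read back by observing
# which probe requests the browser upgrades from HTTP to HTTPS.
import random

NUM_BITS = 16
SUBDOMAINS = [f"b{i}.tracker.example" for i in range(NUM_BITS)]

def write_id(user_id: int, hsts_store: set) -> None:
    """On first visit: for each 1-bit, load an HTTPS resource from that
    subdomain so it responds with an HSTS header and gets pinned."""
    for i, host in enumerate(SUBDOMAINS):
        if (user_id >> i) & 1:
            hsts_store.add(host)          # browser records the HSTS policy

def read_id(hsts_store: set) -> int:
    """On a later visit: request each subdomain over plain HTTP; pinned
    hosts are silently upgraded to HTTPS, revealing the stored bits."""
    user_id = 0
    for i, host in enumerate(SUBDOMAINS):
        upgraded = host in hsts_store     # stand-in for observing an upgrade
        user_id |= int(upgraded) << i
    return user_id

browser_hsts_cache = set()                # simulated per-browser HSTS store
original = random.getrandbits(NUM_BITS)
write_id(original, browser_hsts_cache)
assert read_id(browser_hsts_cache) == original
```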

    In early December, Apple released an update to iOS and Safari which disabled Criteo’s ability to exploit HSTS. This led to Criteo revising down their revenue forecasts and a sharp fall in their share price.

    How is Criteo “Acceptable Advertising”?

    "… w__e sort of seek the consent of users, just like we had done before_."__1_ - Erich Eichmann, CEO Criteo

    _"Only users who don’t already have a Criteo identifier will see the header or footer, and it is displayed only once per device. Thanks to [the?] Criteo advertisers network, most of your users would have already accepted our services on the website of another of our partner. On average, only 5% of your users will see the headers or footers, and for those who do, the typical opt-out rate is less than .2%._" - Criteo Support Center

    Criteo styles itself as a leader in privacy practices, yet they have dedicated significant engineering resources to circumventing privacy tools. They claim to have obtained user consent to tracking based on a minimal warning delivered in what we believe to be a highly confusing context. When a user first visits a site containing Criteo’s script, they receive a small notice stating, “Click any link to use Criteo’s cross-site tracking technology.” If the user continues to use the site, they are deemed to have consented. Little wonder that Criteo can boast of a low opt-out rate to their clients.

    Due to their observed behavior prior to the ITP episode, Criteo’s incorporation into Acceptable Ads in December 2015 aroused criticism among users of ad blockers. We have written elsewhere about how Acceptable Ads creates a clash of interests between adblocking companies and their users, especially those concerned with their privacy. But Criteo’s participation in Acceptable Ads brings into focus the substantive problem with the program itself. The criteria for Acceptable Ads are concerned chiefly with format and aesthetic aspects (e.g., How big is the ad? How visually intrusive is it? Does it blink?) and exclude privacy concerns. Retargeting is unpopular and mocked by users, in part because it wears its creepy tracking practices on its sleeve. Our view is that Criteo’s bad behavior should exclude its products from being deemed “acceptable” in any way.

    The fact that the Acceptable Ads Initiative has approved Criteo’s user-tracking-by-misusing-security-features ads is indicative of the privacy problems we believe to be at the heart of the Acceptable Ads program. In March this year, Eyeo announced an Acceptable Ads Committee that will control the criteria for Acceptable Ads in the future. The Committee should start by instituting a rule which excludes companies that circumvent explicit privacy tools or exploit user security technologies for the purpose of tracking.

    1. http://criteo.investorroom.com/download/Transcript_Q3+2017+Earnings_EDITED.pdf

    https://www.eff.org/deeplinks/2017/12/arms-race-against-trackers-safari-leads-criteo-30

  • Have you ever sent a motivational text to a friend? If you have, perhaps you tailored your message to an activity or location by saying “Good luck in the race!” or “Have fun in New York!” Now, imagine doing this automatically with a computer. What a great invention. Actually, no. That’s not a good invention; it’s our latest Stupid Patent of the Month.

    U.S. Patent No. 9,069,648 is titled “Systems and methods for delivering activity based suggestive (ABS) messages.” The patent describes sending “motivational messages,” based “on the current or anticipated activity of the user,” to a “personal electronic device.” The patent provides examples such as sending the message “don’t give up” when the user is running up a hill. The examples aren’t limited to health or exercise. For example, the patent suggests sending messages like “do not fear” and “God is with you” when a “user enters a dangerous neighborhood.”

    The patent’s description of its invention is filled with silly, non-standard acronyms like ABS for “activity based suggestive” messages or EBIF for “electronic based intelligence function.” These silly acronyms create an illusion of complexity where plain, descriptive language would reveal the mundane nature of the supposed invention. For example, what the patent grandly calls EBIF appears to be nothing more than standard computer processing.

    The ’648 patent is owned by Motivational Health Messaging LLC. While this may be a new company, at least one of the people behind it has been involved in massive patent trolling campaigns before. And the two named inventors have both been inventors on patents that trolls have asserted hundreds of times. One is also an inventor listed on patents asserted by infamous patent troll Shipping and Transit LLC. The other named inventor is the inventor on the patents asserted by Electronic Communication Technologies LLC. Those two entities (with their predecessors) brought over 700 lawsuits, many against very small businesses. In other words, the ’648 patent has been issued to Troll Co. at 1 Troll Street, Troll Town, Trollida USA.

    We believe that the claims of the ’648 patent are clearly invalid under the Supreme Court’s decision in Alice v. CLS Bank, which held abstract ideas do not become eligible for a patent merely because they are implemented in conventional computer technology. Indeed, the patent repeatedly emphasizes that the claimed methods are not tied to any particular hardware or software. For example, it states:

    The software and software logic described in this document … which comprises an ordered listing of executable instructions for implementing logical functions, can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.

    The ’648 patent issued on June 30, 2015, a full year after the Supreme Court’s Alice ruling. Despite this, the patent examiner never even discussed the decision. If Alice is to mean anything at all, it has to be applied to an application like this one.

    In our view, if Motivational Health Messaging asserts its patent in court, any defendant that fought back should prevail under Alice. Indeed, we would hope that the court would strongly consider awarding attorney’s fees to the defendant in such a case. Shipping & Transit has now had two fee awards made against it for asserting patents that are clearly invalid under Alice. And the Federal Circuit recently held that fee awards can be appropriate when patent owners make objectively unreasonable arguments concerning Alice.

    In addition to the problems under Alice, we believe the claims of the ’648 patent should have been rejected as obvious. When the application was filed in 2012, there was nothing new about sending motivational messages or automatically tailoring messages to things like location. In one proposed embodiment, the patent suggests that a “user walking to a hole may be delivered ABS messages, including reminders or instructions on how to play a particular hole.” But golf apps were already doing this. The Patent Office didn’t consider any real-world mobile phone applications when reviewing the application.

    If you want to look for prior art yourself, Unified Patents is running a crowdsourcing contest to find the best prior art to invalidate the ’648 patent. Aside from the warm feelings that come from fighting patent trolls, there is a $2000 prize pool.

    Despite the weakness of its patent, Motivational Health Messaging LLC might still send out demand letters. If you receive such a letter, you can contact EFF and we can help you find counsel.

    We have long complained that the Patent Office promotes patent trolling by granting obvious and/or abstract software patents. The history of the ’648 patent shows how the Patent Office’s failure to properly review applications leads to bad patents falling into the hands of trolls.
