Takedown Bots – Make It Personal

Carl Malamud tweeted on 29 March 2016:

Hate takedown bots, both human and coded. If you’re going to accuse somebody of theft, you should make it personal.

He was retweeting:

Mitch Stoltz
@mitchstoltz

How takedown-bots are censoring the web. https://www.washingtonpost.com/news/the-intersect/wp/2016/03/29/how-were-unwittingly-letting-robots-censor-the-web/

Carl has the right of it.

Users should make the use of takedown notices very personal.

After all, illegitimate takedown notices are thefts from the public domain and/or from fair use.

Caitlin Dewey‘s How we’re unwittingly letting robots censor the Web is a great non-technical piece on the fuller report, Notice and Takedown in Everyday Practice.

Jennifer M. Urban, University of California, Berkeley – School of Law, Brianna L. Schofield, University of California, Berkeley – School of Law, and Joe Karaganis, Columbia University – The American Assembly, penned this abstract:

It has been nearly twenty years since section 512 of the Digital Millennium Copyright Act established the so-called notice and takedown process. Despite its importance to copyright holders, online service providers, and Internet speakers, very little empirical research has been done on how effective section 512 is for addressing copyright infringement, spurring online service provider development, or providing due process for notice targets.

This report includes three studies that draw back the curtain on notice and takedown:

1. using detailed surveys and interviews with more than three dozen respondents, the first study gathers information on how online service providers and rightsholders experience and practice notice and takedown on a day-to-day basis;

2. the second study examines a random sample from over 100 million notices generated during a six-month period to see who is sending notices, why, and whether they are valid takedown requests; and

3. the third study looks specifically at a subset of those notices that were sent to Google Image Search.

The findings suggest that whether notice and takedown “works” is highly dependent on who is using it and how it is practiced, though all respondents agreed that the Section 512 safe harbors remain fundamental to the online ecosystem. Perhaps surprisingly in light of large-scale online infringement, a large portion of OSPs still receive relatively few notices and process them by hand. For some major players, however, the scale of online infringement has led to automated, “bot”-based systems that leave little room for human review or discretion, and in a few cases notice and takedown has been abandoned in favor of techniques such as content filtering. The second and third studies revealed surprisingly high percentages of notices of questionable validity, with mistakes made by both “bots” and humans.

The findings strongly suggest that the notice and takedown system is important, under strain, and that there is no “one size fits all” approach to improving it. Based on the findings, we suggest a variety of reforms to law and practice.

At 160 pages, it isn’t a quick or light read.

The gist of both Caitlin’s post and the fuller report is that automated systems are increasingly being used to create and enforce takedown requests.

Even so, Caitlin notes:

Despite the margin of error, most major players seem to be trending away from human review. The next frontier in the online copyright wars is automated filtering: Many rights-holders have pressed for tools that, like YouTube’s Content ID, could automatically identify protected content and prevent it from ever publishing. They’ve also pushed for “staydown” measures that would keep content from being reposted once it’s been removed, a major complaint with the current system.

Consider one of the sources Caitlin quotes:

…agreed to speak to The Post on condition of anonymity because he has received death threats over his work, said that while his company stresses accuracy and fairness, it’s impossible for seven employees to vet each of the 90,000 links their search spider finds each day. Instead, the algorithm classifies each link as questionable, probable or definite infringement, and humans only review the questionable ones before sending packets of takedown requests to social networks, search engines, file-hosting sites and other online platforms.
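The workflow that source describes — machine classification, with human review reserved for the gray area only — reduces to a few lines of code, which is part of why errors slip through: everything the algorithm rates "probable" or "definite" bypasses the seven employees entirely. A minimal sketch (the scores, thresholds, and names here are invented for illustration; the article gives no details of the company's actual system):

```python
from enum import Enum

class Verdict(Enum):
    QUESTIONABLE = "questionable"
    PROBABLE = "probable"
    DEFINITE = "definite"

def classify(score: float) -> Verdict:
    # Hypothetical thresholds -- the real scoring model is not described.
    if score >= 0.9:
        return Verdict.DEFINITE
    if score >= 0.6:
        return Verdict.PROBABLE
    return Verdict.QUESTIONABLE

def triage(links):
    """Split crawled (url, score) pairs: only QUESTIONABLE links go to
    human review; PROBABLE and DEFINITE go straight into takedown
    request packets with no human in the loop."""
    human_review, auto_takedown = [], []
    for url, score in links:
        if classify(score) is Verdict.QUESTIONABLE:
            human_review.append(url)
        else:
            auto_takedown.append(url)
    return human_review, auto_takedown
```

At 90,000 links a day, any misclassification on the "probable" side of the threshold becomes an unreviewed takedown request.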

Copyright enforcers should discover that their thefts from the public domain, and their infringements on fair use, put them on a par with car burglars and shoplifters.

What copyright enforcers lack is an incentive to err on the side of not issuing questionable takedown notices.

If the consequences of illegitimate takedown notices are high enough, they will spend the funds necessary to enforce only “legitimate” rights.

If you are interested in righteousness over effectiveness, by all means, pursue reform of “notice and takedown” in the copyright-holder-owned US Congress.

On the other hand, someone, more than a single someone, is responsible for honoring “notice and takedown” requests. Those someones also own members of Congress and can effectively seek changes that victims of illegitimate takedown requests cannot.

Imagine a leak from Yahoo! that outs those responsible for honoring “notice and takedown” requests.

Or the members of “Google’s Trusted Copyright Removal Program.” Besides “Glass.”

Or the takedown requests for YouTube.

Theft from the public cannot be sustained in the bright light of transparency.
