Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

February 18, 2019

UK Parliament Pouts About Facebook – Alternative History

Filed under: Facebook,Fair Use — Patrick Durusau @ 10:39 am

I followed Facebook labelled ‘digital gangsters’ by report on fake news by David Pegg to find Disinformation and ‘fake news’: Final Report published, which does link to the report itself, Disinformation and ‘fake news’: Final Report, an eleventy-one page pout labeling Facebook “digital gangsters” (pages 43 and 91, if you are interested).

The report recommends Parliament respond to the invention of the movable type printing press:

MPs conclude: “[Printing presses] cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content [they produce].” (alternative history edits added)

Further, the printing press has enabled broadsheets, without identifying the sources of their content, to put democracy at risk:

“Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major [broad sheets and newspapers] we use everyday. Much of this is directed from agencies working in foreign countries, including Russia.

For obscure reasons, the report calls for changing the current practice of foreign players interfering in elections and governments of others, saying:

“The UK is clearly vulnerable to covert digital influence campaigns and the Government should be conducting analysis to understand the extent of the targeting of voters, by foreign players, during past elections.” The Government should consider whether current legislation to protect the electoral process from malign influence is sufficient. Legislation should be explicit on the illegal influencing of the democratic process by foreign players.

The UK, its allies and enemies have been interfering in each others’ elections, governments and internal affairs for centuries. The rush to insulate the UK and its long time partner in interference, the United States, from “illegal interference” is a radical departure from current international norms.

On the whole, the report struts and pouts as only a UK parliament committee, spurned by Mark Zuckerberg, not once, not twice, but three times, can.

There’s no new information in the report, just more repetition that can be stacked up and then cited to make questionable claims seem less questionable. Oh, that’s one of the alleged tactics of disinformation, isn’t it?

Can we say that “disinformation,” “interference,” and “influencing” are in the eye of the beholder?

PS: The only legislation I would support for social media platforms is a prohibition of any terms of service that bar any content. Social media platforms should be truly content neutral. If you can digitize it, it should be posted. Filtering is the answer to offensive content. Users have no right to censor what other readers choose to consume.

March 29, 2017

What’s Up With Data Padding? (Regulations.gov)

Filed under: Data Quality,Fair Use,Government Data,Intellectual Property (IP),Transparency — Patrick Durusau @ 10:41 am

I forgot to mention in Copyright Troll Hunting – 92,398 Possibles -> 146 Possibles that, while using LibreOffice, I deleted a large number of columns that were either N/A-only or not relevant to troll-mining.zip.

After removal of the “no last name” records, these fields had N/A for all records, except as noted (a scripted version of the same pruning follows the list):

  1. L – Implementation Date
  2. M – Effective Date
  3. N – Related RINs
  4. O – Document SubType (Comment(s))
  5. P – Subject
  6. Q – Abstract
  7. R – Status – (Posted, except for 2)
  8. S – Source Citation
  9. T – OMB Approval Number
  10. U – FR Citation
  11. V – Federal Register Number (8 exceptions)
  12. W – Start End Page (8 exceptions)
  13. X – Special Instructions
  14. Y – Legacy ID
  15. Z – Post Mark Date
  16. AA – File Type (1 docx)
  17. AB – Number of Pages
  18. AC – Paper Width
  19. AD – Paper Length
  20. AE – Exhibit Type
  21. AF – Exhibit Location
  22. AG – Document Field_1
  23. AH – Document Field_2
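
For what it’s worth, that pruning can be scripted rather than done by hand in LibreOffice. A minimal sketch, assuming csvkit is installed and that the export (DOCKET_COLC-2015-0013.csv, from the post below) keeps its columns in the order listed above, L being column 12, AH column 34 and the Document Detail URL in AI, column 35:

# drop columns L through AH (12-34), keeping A-K and the AI URL column
csvcut -C 12-34 DOCKET_COLC-2015-0013.csv > trimmed.csv

# sanity-check that a column really is all N/A before dropping it,
# e.g. Q - Abstract, column 17
csvcut -c 17 DOCKET_COLC-2015-0013.csv | tail -n +2 | sort -u

Same result, minus the mouse work.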

Regulations.gov, not the Copyright Office, is responsible for the collection and management of comments, including the bulked-up export of comments.

From the state of the records, one suspects the “bulking up” is NOT an artifact of the export but reflects how each record is stored.

One way to test that theory would be a query on the noise fields via the API for Regulations.gov.
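
A minimal sketch of such a query, assuming the present-day v4 API at api.regulations.gov and an API key from api.data.gov (the endpoint and header below are my assumptions, not taken from the outdated documentation mentioned next; REGS_API_KEY is a hypothetical environment variable):

# fetch one comment's metadata and eyeball which fields come back N/A, null or empty
# (document ID borrowed from the example later in this post)
curl -s -H "X-Api-Key: $REGS_API_KEY" \
  "https://api.regulations.gov/v4/documents/COLC-2015-0013-52236" \
  | python3 -m json.tool

If the same noise fields are populated (even with N/A) in the stored record, the padding isn’t an export artifact.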

The documentation for the API is outdated; the Field References documentation lacks the Document Detail (field AI), which contains the URL to access the comment.

The closest thing I could find was:

fileFormats Formats of the document, included as URLs to download from the API

How easy/hard it will be to download attachments isn’t clear.

BTW, the comment pages themselves are seriously puffed up. Take https://www.regulations.gov/document?D=COLC-2015-0013-52236.

Saved to disk: 148.6 KB.

Content of the comment: 2.5 KB.

The content of the comment is 1.6% of the delivered webpage.

It must have taken serious effort to achieve a 98.4% noise to 1.6% signal ratio.
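
A rough way to reproduce that ratio for any comment page, assuming the page has been saved to disk as page.html and the comment text itself pasted into comment.txt (both file names are mine):

# percentage of the downloaded page that is actual comment text
PAGE=$(wc -c < page.html)      # bytes of the saved page
TEXT=$(wc -c < comment.txt)    # bytes of the comment content
echo "scale=1; 100 * $TEXT / $PAGE" | bc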

How transparent is data when you have to mine for the 1.6% that is actual content?

March 28, 2017

Copyright Troll Hunting – 92,398 Possibles -> 146 Possibles

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 6:51 pm

When hunting copyright trolls, well, trolls of any kind, the smaller the number to be hunted the better.

The Copyright Office is conducting the Section 512 Study, which it describes as:

The United States Copyright Office is undertaking a public study to evaluate the impact and effectiveness of the safe harbor provisions contained in section 512 of title 17, United States Code.

Enacted in 1998 as part of the Digital Millennium Copyright Act (“DMCA”), section 512 established a system for copyright owners and online entities to address online infringement, including limitations on liability for compliant service providers to help foster the growth of internet-based services. Congress intended for copyright owners and internet service providers to cooperate to detect and address copyright infringements. To qualify for protection from infringement liability, a service provider must fulfill certain requirements, generally consisting of implementing measures to expeditiously address online copyright infringement.

While Congress understood that it would be essential to address online infringement as the internet continued to grow, it may have been difficult to anticipate the online world as we now know it, where each day users upload hundreds of millions of photos, videos and other items, and service providers receive over a million notices of alleged infringement. The growth of the internet has highlighted issues concerning section 512 that appear ripe for study. Accordingly, as recommended by the Register of Copyrights, Maria A. Pallante, in testimony and requested by Ranking Member Conyers at an April 2015 House Judiciary Committee hearing, the Office is initiating a study to evaluate the impact and effectiveness of section 512 and has issued a Notice of Inquiry requesting public comment. Among other issues, the Office will consider the costs and burdens of the notice-and-takedown process on large- and small-scale copyright owners, online service providers, and the general public. The Office will also review how successfully section 512 addresses online infringement and protects against improper takedown notices.

The Office received over 92,000 written submissions by the April 1, 2016 deadline for the first round of public comments. The Office then held public roundtables on May 2nd and 3rd in New York and May 12th and 13th in San Francisco to seek further input on the section 512 study. Transcripts of the New York and San Francisco roundtables are now available online. Additional written public comments are due by 11:59 pm EST on February 21, 2017 and written submissions of empirical research are due by 11:59 pm EST on March 22, 2017.

You can see the comments at: Requests for Public Comments: Digital Millennium Copyright Act Safe Harbor Provisions, all 92,398 of them.

You can even export them to a CSV file, which runs a little over 33.5 MB in size.

It is likely that the same copyright trolls who provoked this review with non-public communications to the Copyright Office and others also posted comments, but how to find them in a sea of 92,398 comments?

Some simplifying assumptions:

No self-respecting copyright troll will use the public comment template.

grep -v "Template Form Comment" DOCKET_COLC-2015-0013.csv | wc -l

Using grep with -v means it does NOT return matching lines. That is, only lines without “Template Form Comment” will be returned.

We modify that to read:

grep -v "Template Form Comment" DOCKET_COLC-2015-0013.csv > no-form.csv

The > redirection writes the lines without “Template Form Comment” to the file no-form.csv.

Next, scanning the file, we notice “no last name/No Last Name.”

grep -iv "no last name" no-form.csv | wc -l

With grep, -i means case is ignored for the search string “no last name” in the file no-form.csv, and -v again returns only the lines without a match.

The count without “no last name”: 3359.
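
For the record, the two filters can be chained in one pass, skipping the intermediate file:

# template comments out, "no last name" records out, count the survivors
grep -v "Template Form Comment" DOCKET_COLC-2015-0013.csv \
  | grep -iv "no last name" \
  | wc -l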

A lot better than 92,398 but not really good enough.

This is nearing hand-editing territory, so I resorted to LibreOffice at this point.

Sort on column D (of columns A to AI), organization. If you scroll down, row 123 has N/A for organization. The entry just prior to it is “musicnotes.” What? Where did Sony, etc., go?

Ah, LibreOffice sorted the organizations and treated “N/A” as an organization’s name.

Let’s see, from row 123 to row 3293, inclusive.

Well, deleting those rows leaves us with: 183 rows.

I continued by deleting comments by anonymous, individuals, etc., and my final total is 146 rows.
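
If you prefer scripting to spreadsheet surgery, a sketch of the same cut, assuming csvkit is installed and that organization is column D (4) in the export:

# keep only rows whose organization field is neither empty nor N/A
csvgrep -c 4 -r '^(N/A)?$' -i no-form.csv > with-org.csv
# count what's left (header excluded)
csvstat --count with-org.csv

Comments from anonymous posters, individuals and the good guys still have to be weeded out by hand, as above.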

Check out troll-mining.zip!

Not all of these are copyright trolls, mind you; I still need to remove the Internet Archive, the EFF and other people on the right side of the section 512 issue.

Who else should I remove?

There are several reasons to want a clean copyright troll list.

First, it leads to FOIA requests about other communications to the Copyright Office by the trolls in the list. Can’t ask if you don’t have a troll list.

Second, it provides locations for protests and other ways to call unwanted attention to these trolls.

Third, well, you know, unfortunate things happen to trolls. It’s a natural consequence of a life predicated upon harming others.

June 18, 2016

A Plausible Explanation For The EC Human Brain Project

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 2:24 pm

I have puzzled for years over how to explain the EC’s Human Brain Project. See The EC Brain if you need background on this ongoing farce.

While reading Reject Europe’s Plans To Tax Links and Platforms by Jeremy Malcolm, I suddenly understood the motivation for the Human Brain Project!

From the post:

A European Commission proposal to give new copyright-like veto powers to publishers could prevent quotation and linking from news articles without permission and payment. The Copyright for Creativity coalition (of which EFF is a member) has put together an easy survey and answering guide to guide you through the process of submitting your views before the consultation for this “link tax” proposal winds up on 15 June.

Since the consultation was opened, the Commission has given us a peek into some of the industry pressures that have motivated what is, on the face of it, otherwise an inexplicable proposal. In the synopsis report that accompanied the release of its Communication on Online Platforms, it writes that “Right-holders from the images sector and press publishers mention the negative impact of search engines and news aggregators that take away some of the traffic on their websites.” However, this claim is counter-factual, as search engines and aggregators are demonstrably responsible for driving significant traffic to news publishers’ websites. This was proved when a study conducted in the wake of introduction of a Spanish link tax resulted in a 6% decline in traffic to news websites, which was even greater for the smaller sites.

There is a severe shortage of human brains at the European Commission! The Human Brain Project is a failing attempt to remedy that shortage of human brains.

Before you get angry, Europe is full of extremely fine brains. But that isn’t the same thing as saying they are found at the European Commission.

Consider, for example, the farcical request for comments, issued after the outcome had already been decided, as cited above. That is the EC’s customary favoritism and heavy-handedness.

I would not waste electrons submitting comments to the EC on this issue.

Spend your time mining EU news sources and making fair use of their content. Every now and again, gather up your links and send them to the publications, copying the EC, so the publications can see the benefits of your linking versus the overhead of the EC.

As the Spanish link tax experience proves, link taxes may deceive property cultists into expecting a windfall; in truth their revenue will decrease, and what revenue is collected will go to the EC.

There’s the mark of a true EC solution:

The intended “beneficiary” is worse off and the EC absorbs what revenue, if any, results.

June 7, 2016

Doctorow on Encrypted Media Extensions (EME) @ W3C and DRM

Filed under: Fair Use,Government,Intellectual Property (IP) — Patrick Durusau @ 7:17 pm

Cory Doctorow outlines the important public policy issues semi-hidden in W3C efforts to standardize Encrypted Media Extensions (EME).

I knew I would agree with Cory’s points, more or less, before even reading the post. But I also knew that many of his points, if not all, aren’t going to be persuasive to some in the DRM discussion.

If you already favor reasonable accommodation between consumers of content and rightsholders, recognition of “fair use,” and allowances for research and innovation, enjoy Cory’s post and do what you can to support the EFF and others in this particular dispute.

If you are currently a rightsholder and strong supporter of DRM, I don’t think Cory’s post is going to be all that persuasive.

Rather than focusing on public good, research, innovation, etc., I have a very different argument for rightsholders, who I distinguish from people who will profit from DRM and its implementations.

I will lay out all the nuances either tomorrow or the next day, but the crux of my argument is the question: “What is the ROI for rightsholders from DRM?”

You will be able to satisfy yourself of my analysis, using your own confidential financial statements. The real ones, not the ones you show the taxman.

To be sure, someone intends to profit from DRM and its implementation, but it isn’t who you think it is.

In the meantime, enjoy Cory’s post!

May 28, 2016

Pamela Samuelson on Java and Fair Use – Test For Prospective Employers

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 8:43 pm

Pamela Samuelson has posted a coherent and compelling narrative on why the Java API victory of Google over Oracle is a very good thing.

Here’s where she comes out:


Developers of software need some simple norms to live by. One such norm is that independent reimplementation of an API in one’s own original code does not infringe copyright. That’s the law as well as good public policy. The public has greatly benefited by the existence of this norm because anyone with a creative software idea can write programs that will run on existing platforms. The software industry has thrived under this norm, and the public has a wide array of choices of innovative programs in a competitive marketplace.

Put Pamela’s analysis to good use.

Ask at your next interview if the prospective employer agrees with Pamela’s post.

It’s 877 words and can double as an attention span test for the interviewer.

Ask before you leap.

Danger! Danger! Oracle Attorney Defends GPL

Filed under: Fair Use,Intellectual Property (IP),Open Source — Patrick Durusau @ 10:49 am

Op-ed: Oracle attorney says Google’s court victory might kill the GPL by Annette Hurst.

From the header:

Annette Hurst is an attorney at Orrick, Herrington & Sutcliffe who represented Oracle in the recent Oracle v. Google trial. This op-ed represents her own views and is not intended to represent those of her client or Ars Technica.

The Oracle v. Google trial concluded yesterday when a jury returned a verdict in Google’s favor. The litigation began in 2010, when Oracle sued Google, saying that the use of Java APIs in Android violated copyright law. After a 2012 trial, a judge held that APIs can’t be copyrighted at all, but that ruling was overturned on appeal. In the trial this month, Google successfully argued that its use of Java APIs, about 11,500 lines of code in all, was protected by “fair use.”

I won’t propagate Annette’s rant but you can read it for yourself at: http://arstechnica.com/tech-policy/2016/05/op-ed-oracle-attorney-says-googles-court-victory-might-kill-the-gpl/.

What are free software supporters to make of their longtime deranged, drooling critic expressing support for the GPL?

Should they flee as pursued by wraiths on wings?

Should they stuff their cloaks in their ears?

Are these like the lies of Saruman?

Or perhaps better, Wormtongue?

My suggestion? Point to Annette’s rant to alert others but don’t repeat it, don’t engage it, just pass over it in silence.

Repeating evil counsel gives it legitimacy.

Yours.

May 22, 2016

Does social media have a censorship problem? (Only if “arbitrary and knee-jerk?”)

Filed under: Censorship,Fair Use,Free Speech — Patrick Durusau @ 9:28 pm

Does social media have a censorship problem? by Ryan McChrystal.

From the post:


It is for this reason that we should be concerned by content moderators. Worryingly, they often find themselves dealing with issues they have no expertise in. A lot of content takedown reported to Online Censorship is anti-terrorist content mistaken for terrorist content. “It potentially discourages those very people who are going to be speaking out against terrorism,” says York.

Facebook has 1.5 billion users, so small teams of poorly paid content moderators simply cannot give appropriate consideration to all flagged content against the secretive terms and conditions laid out by social media companies. The result is arbitrary and knee-jerk censorship.

Yes, social media has a censorship problem, and not only when moderators lack “expertise” but whenever they attempt censorship at all.

Ryan’s post (whether Ryan thinks this or not I don’t know) presumes two kinds of censorship:

Bad Censorship: arbitrary and knee-jerk

Good Censorship: guided by expertise in a subject area

Bad is the only category for censorship. (period, full stop)

Although social media companies are not government agencies and not bound by laws concerning free speech, Ryan’s recitals about Facebook censorship should give you pause.

Do you really want social media companies, whatever their intentions, not only censoring present content but obliterating comments history on a whim?

Being mindful that today you may agree with their decision but tomorrow may tell another tale.

Social media has a very serious censorship problem, mostly borne of the notion that social media companies should be the arbiters of social discourse.

I prefer the hazards and dangers of unfettered free speech over discussions bounded by the Joseph Goebbels imitators of a new age.

Suggestions for non-censoring or the least censoring social media platforms?

May 18, 2016

Colleges Shouldn’t Have to Deal With Copyright Monitoring [Broods of Copyright Vipers]

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 3:20 pm

Colleges Shouldn’t Have to Deal With Copyright Monitoring by Pamela Samuelson.

From the post:

Colleges have a big stake in the outcome of the lawsuit that three publishers, Cambridge University Press, Oxford University Press, and Sage Publications, brought against Georgia State University officials for copyright infringement. The lawsuit, now in its eighth year, challenged GSU’s policy that allowed faculty members to upload excerpts (mainly chapters) of in-copyright books for students to read and download from online course repositories.

Four years ago, a trial court held that 70 of the 75 challenged uses were fair uses. Two years ago, an appellate court sent the case back for a reassessment under a revised fair-use standard. The trial court has just recently ruled that of the 48 claims remaining in the case, only four uses, each involving multiple chapters, infringed. The question now is, What should be the remedy for those four infringements?

Sage was the only publisher that prevailed at all, and it lost more infringement claims than it won. Cambridge and Oxford came away empty-handed. Despite the narrowness of Sage’s win, all three publishers have asked the court for a permanent injunction that would impose many new duties on GSU and require close monitoring of all faculty uploads to online course repositories.

I expected better out of Cambridge and Oxford, especially Cambridge, which has in recent years allowed free electronic access to some printed textbooks.

Sage and the losing publishers, Cambridge and Oxford, seek to chill the exercise of fair use by not only Georgia State University but universities everywhere.

Pamela details the outrageous nature of the demands made by the publishers and concludes that she is rooting for GSU on appeal.

We should all root for GSU on appeal but that seems so unsatisfying.

It does nothing to darken the day for the broods of copyright vipers at Cambridge, Oxford or Sage.

In addition to creating this money pit for their publishers, the copyright vipers want to pad their nests by:


As if that were not enough, the publishers want the court to require GSU to provide them with access to the university’s online course system and to relevant records so the publishers could confirm that the university had complied with the record-keeping and monitoring obligations. The publishers have asked the court to retain jurisdiction so that they could later ask it to reopen and modify the court order concerning GSU compliance measures.

I don’t know how familiar you are with academic publishing but every academic publisher has a copyright department that shares physical space with acquisitions and publishing.

Acquisitions and publishing are concerned with the collection and dissemination of knowledge, while recovering enough profit to remain viable; the copyright department could just as well be employed by Screw.

Expanding the employment rolls of copyright departments to monitor universities’ fair use is another drain on their respective publishers.

If you need proof of copyright departments being a dead loss for their publishers, consider the most recent annual reports for Cambridge and Oxford.

Does either one highlight their copyright departments as centers of exciting development and income? Do they tout this eight year long battle against fair use?

No? I didn’t think so but wanted your confirmation to be sure.

I can point you to a history of Sage, but as a privately held publisher, it has no public annual report. Even that history, over changing economic times in publishing, finds no space to extol its copyright vipers and their role in the GSU case.

Beyond rooting for GSU, work with the acquisitions and publication departments at Cambridge, Oxford and Sage, to help improve their bottom line profit and drown their respective broods of copyright vipers.

How?

Before you sign a publishing agreement, ask your publisher for a verified statement of the ROI contributed by their copyright office.

If enough of us ask, the question will resonate across the academic publishing community.

April 1, 2016

Takedowns Hurt Free Expression

Filed under: Fair Use,Free Speech,Intellectual Property (IP) — Patrick Durusau @ 9:00 pm

EFF to Copyright Office: Improper Content Takedowns Hurt Online Free Expression.

From the post:

Content takedowns based on unfounded copyright claims are hurting online free expression, the Electronic Frontier Foundation (EFF) told the U.S. Copyright Office Friday, arguing that any reform of the Digital Millennium Copyright Act (DMCA) should focus on protecting Internet speech and creativity.

EFF’s written comments were filed as part of a series of studies on the effectiveness of the DMCA, begun by the Copyright Office this year. This round of public comments focuses on Section 512, which provides a notice-and-takedown process for addressing online copyright infringement, as well as “safe harbors” for Internet services that comply.

“One of the central questions of the study is whether the safe harbors are working as intended, and the answer is largely yes,” said EFF Legal Director Corynne McSherry. “The safe harbors were supposed to give rightsholders streamlined tools to police infringement, and give service providers clear rules so they could avoid liability for the potentially infringing acts of their users. Without those safe harbors, the Internet as we know it simply wouldn’t exist, and our ability to create, innovate, and share ideas would suffer.”

As EFF also notes in its comments, however, the notice-and-takedown process is often abused. A recent report found that the notice-and-takedown system is riddled with errors, misuse, and overreach, leaving much legal and legitimate content offline. EFF’s comments describe numerous examples of bad takedowns, including many that seemed based on automated content filters employed by the major online content sharing services. In Friday’s comments, EFF outlined parameters endorsed by many public interest groups to rein in filtering technologies and protect users from unfounded blocks and takedowns.

A must read whether you are interested in pursuing traditional relief or have more immediate consequences for rightsholders in mind.

Takedowns cry out for the application of data mining to identify the people who pursue takedowns, the use of takedowns, who benefits, to say nothing of the bots that are presently prowling the web looking for new victims.

I for one don’t imagine that rightsholders’ bots are better written than most government software (you did hear about State’s latest vulnerability?).

Sharpening your data skills on takedown data would benefit you and the public, which is being sorely abused at the moment.

March 29, 2016

Takedown Bots – Make It Personal

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 8:26 pm

Carl Malamud tweeted on 29 March 2016:

Hate takedown bots, both human and coded. If you’re going to accuse somebody of theft, you should make it personal.

in retweeting Mitch Stoltz (@mitchstoltz):

How takedown-bots are censoring the web. https://www.washingtonpost.com/news/the-intersect/wp/2016/03/29/how-were-unwittingly-letting-robots-censor-the-web/

Carl has the right of it.

Users should make the use of takedown notices very personal.

After all, illegitimate takedown notices are thefts from the public domain and/or fair use.

Caitlin Dewey‘s How we’re unwittingly letting robots censor the Web is a great non-technical piece on the fuller report, Notice and Takedown in Everyday Practice.

Jennifer M. Urban, University of California, Berkeley – School of Law, Brianna L. Schofield, University of California, Berkeley – School of Law, and Joe Karaganis, Columbia University – The American Assembly, penned this abstract:

It has been nearly twenty years since section 512 of the Digital Millennium Copyright Act established the so-called notice and takedown process. Despite its importance to copyright holders, online service providers, and Internet speakers, very little empirical research has been done on how effective section 512 is for addressing copyright infringement, spurring online service provider development, or providing due process for notice targets.

This report includes three studies that draw back the curtain on notice and takedown:

1. using detailed surveys and interviews with more than three dozen respondents, the first study gathers information on how online service providers and rightsholders experience and practice notice and takedown on a day-to-day basis;

2. the second study examines a random sample from over 100 million notices generated during a six-month period to see who is sending notices, why, and whether they are valid takedown requests; and

3. the third study looks specifically at a subset of those notices that were sent to Google Image Search.

The findings suggest that whether notice and takedown “works” is highly dependent on who is using it and how it is practiced, though all respondents agreed that the Section 512 safe harbors remain fundamental to the online ecosystem. Perhaps surprisingly in light of large-scale online infringement, a large portion of OSPs still receive relatively few notices and process them by hand. For some major players, however, the scale of online infringement has led to automated, “bot”-based systems that leave little room for human review or discretion, and in a few cases notice and takedown has been abandoned in favor of techniques such as content filtering. The second and third studies revealed surprisingly high percentages of notices of questionable validity, with mistakes made by both “bots” and humans.

The findings strongly suggest that the notice and takedown system is important, under strain, and that there is no “one size fits all” approach to improving it. Based on the findings, we suggest a variety of reforms to law and practice.

At 160 pages it isn’t a quick or light read.

The gist of both Caitlin’s post and the fuller report is that automated systems are increasingly being used to create and enforce takedown requests.

Despite the margin of reported error, Caitlin notes:

Despite the margin of error, most major players seem to be trending away from human review. The next frontier in the online copyright wars is automated filtering: Many rights-holders have pressed for tools that, like YouTube’s Content ID, could automatically identify protected content and prevent it from ever publishing. They’ve also pushed for “staydown” measures that would keep content from being reposted once it’s been removed, a major complaint with the current system.

There is one source Caitlin uses:

…agreed to speak to The Post on condition of anonymity because he has received death threats over his work, said that while his company stresses accuracy and fairness, it’s impossible for seven employees to vet each of the 90,000 links their search spider finds each day. Instead, the algorithm classifies each link as questionable, probable or definite infringement, and humans only review the questionable ones before sending packets of takedown requests to social networks, search engines, file-hosting sites and other online platforms.

Copyright enforcers should discover that their thefts from the public domain and infringements on fair use put them on a par with car burglars or shoplifters.

What copyright enforcers lack is an incentive to err on the side of not issuing questionable takedown notices.

If the consequences of illegitimate takedown notices are high enough, they will spend the funds necessary to enforce only “legitimate” rights.

If you are interested in righteousness over effectiveness, by all means pursue reform of “notice and takedown” in the copyright-holder-owned US Congress.

On the other hand, someone, more than a single someone, is responsible for honoring “notice and takedown” requests. Those someones also own members of Congress and can effectively seek changes that victims of illegitimate takedown requests cannot.

Imagine a leak from Yahoo! that outs those responsible for honoring “notice and takedown” requests.

Or the members of “Google’s Trusted Copyright Removal Program.” Besides “Glass.”

Or the takedown requests for YouTube.

Theft from the public cannot be sustained in the bright light of transparency.

February 3, 2016

Google Paywall Loophole Going Bye-Bye [Fair Use Driving Pay-Per-View Traffic]

Filed under: Cybersecurity,Fair Use,Intellectual Property (IP) — Patrick Durusau @ 2:51 pm

The Wall Street Journal tests closing the Google paywall loophole by Lucia Moses.

From the post:

The Wall Street Journal has long had a strict paywall — unless you simply copy and paste the headline into Google, a favored route for those not wanting to pony up $200 a year. Some users have noticed in recent days that the trick isn’t working.

A Journal spokesperson said the publisher was running a test to see if doing so would entice would-be subscribers to pay up. The rep wouldn’t elaborate on how long and extensive the experiment was and if permanently closing the loophole was a possible outcome.

“We are experimenting with a number of different trial mechanics at the moment to provide a better subscription taster for potential new customers,” the rep said. “We are a subscription site and we are always looking at better ways to optimize The Wall Street Journal experience for our members.”

The Wall Street Journal can deprive itself of the benefits of “fair use” if it wants to, but is that a sensible position?

Fair Use Benefits the Wall Street Journal

Rather than a total ban on copying, what if the amount of an article that can be copied were set by algorithm, such that, at a minimum, the first two or three paragraphs of any story can be copied, whether you arrive from Google or land directly on the WSJ site?
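
As a toy sketch of the idea (mine, not the WSJ’s): treat blank-line-separated paragraphs as the unit and expose only the first three of any story as the freely copyable excerpt. story.txt is a hypothetical file holding one article:

# awk paragraph mode: RS= splits records on blank lines,
# ORS="\n\n" keeps the blank-line separation on output
awk -v RS= -v ORS="\n\n" 'NR <= 3' story.txt > free-excerpt.txt

Everything past the third paragraph stays behind the paywall.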

Think about it. Wall Street Journal readers aren’t paying to skim the lead paragraphs in the WSJ. They are paying to see the full story and analysis in particular subject areas.

Bloggers, such as myself, cannot drive content seekers to the WSJ because the first sentence or two isn’t enough for readers to develop an interest in the WSJ report.

If I could quote the first 2 or 3 paragraphs, add in some commentary and perhaps other links, then a visitor to the WSJ is visiting to see the full content the Wall Street Journal has to offer.

The story lead is acting, as it should, to drive traffic to the Wall Street Journal, possibly from readers who won’t otherwise think of the Wall Street Journal; some of my readers outside North America and Europe, for example.

Bloggers Driving Readers to Wall Street Journal Pay-Per-View Content

Developing algorithmic fair use as I describe it would enlist an army of bloggers in spreading notice of the Wall Street Journal’s pay-per-view content, at no expense to the Wall Street Journal. As a matter of fact, bloggers would be alerting readers to pay-per-view WSJ content at their own expense.

It may just be me but if someone were going to drive viewers to pay-per-view content on my site, at their own expense, with fair use of content, I would be insane to prevent that. But, I’m not the one grasping at dimes while $100 bills are flying overhead.

Close the Loophole, Open Up Fair Use

Full disclosure: I don’t have any evidence for fair use driving traffic to the Wall Street Journal, because that evidence doesn’t exist. The Wall Street Journal would have to enable fair use and track the appearance of fair use content, the traffic originating from it, and the conversions from that additional traffic.

Straightforward data analytics, but it won’t happen by itself. When the WSJ succeeds with such a model, you can be sure that other paywall publishers will be quick to follow suit.

Caveat: Yes, there will be people who will only ever consume the free use content. And your question? If they aren’t ever going to be paying customers and the same fair use is delivering paying customers, will you lose the latter in order to spite the former?

Isn’t that like cutting off your nose to spite your face?

Historical PS:

I once worked for a publisher that felt a “moral obligation,” their words, not mine, to prevent anyone from claiming a missing journal issue to which they might not be entitled. Yeah. Journal issues that were as popular as the Watchtower is among non-Jehovah’s Witnesses. Cost to the publisher: about $3.00 per issue. Cost to verify entitlement: a full-time position at the publisher.

I suspect claims ran less than 200 per year. My suggestion was to answer any request with “Thanks, here’s your missing copy.” End of transaction. Track claims only to prevent abuse. Moral outrage followed.

Is morality the basis for your pay-per-view access policy? I thought pay-per-view was a way to make money.

Pass this post along to the WSJ if you know anyone there. Free suggestion. Perhaps they will be interested in other, non-free suggestions.

September 14, 2015

‘Dancing Baby’ Wins Copyright Case

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 9:00 pm

‘Dancing Baby’ Wins Copyright Case by Laura Wagner.

From the post:

A baby bobs up and down in a kitchen, as a Prince song plays in the background. His mother laughs in the background and his older sister zooms in and out of the frame.

This innocuous 29-second home video clip was posted to YouTube in 2007 and sparked a long legal proceeding on copyright and fair use law.

In the case, Lenz v. Universal — which has gained notoriety as the “dancing baby” lawsuit — Universal Music Group sent YouTube a warning to take the video down, claiming copyright infringement under the Digital Millennium Copyright Act. Then, Stephanie Lenz, poster of the video and mother of the baby, represented by Electronic Frontier Foundation, sued Universal for wrongly targeting lawful fair use.

Today, eight years later, a federal appeals court has sided with the dancing baby.

If you need more legal background on the issues, consider the EFF page on Lenz v. Universal (links to original court documents), or the Digital Media Law page, Universal Music v. Lenz.

The DMCA (Digital Millennium Copyright Act) should be amended to presume fair use unless and until the complaining party convinces a court that someone else is profiting from the use of their property. No profit, no foul. No more non-judicial demands for takedowns of any content.

June 10, 2014

2nd Circuit on Fair Use

Filed under: Fair Use — Patrick Durusau @ 3:59 pm

In win for libraries, court rules database of Google-scanned books is “fair use” by Jeff John Roberts.

From the post:

A federal appeals court ruled on Tuesday that the Hathi Trust, a searchable collection of digital books controlled by university libraries, does not violate copyright, and that the libraries can continue to make copies for digitally-impaired readers.

The decision is a setback for the Authors Guild and for other groups of copyright holders who joined the lawsuit to shut down the Hathi Trust’s operations. By contrast, it is a victory for many scholars and librarians who regard the database as an invaluable repository of knowledge.

Of particular interest to those of us interested in creating topic maps based upon currently copyrighted material:

Do not attempt work on copyrighted material without local legal advice on your particular case, but be aware the court has held:

the creation of a full-text searchable database is a quintessentially transformative use…

of a copyrighted text.

Excellent!

Reclaiming Fair Use

Filed under: Fair Use — Patrick Durusau @ 3:42 pm

Reclaiming Fair Use: How to Put Balance Back in Copyright by Patricia Aufderheide and Peter Jaszi.

The irony of the title is that no electronic version of the book is available and the online scans are of a very limited part of the content.

Possibly a “do as I say and don’t do as I do” sort of book.

I first saw this in a tweet by Michael Peter Edson.

November 14, 2013

Fair Use Prevails!

Filed under: Authoring Topic Maps,Fair Use,Topic Maps — Patrick Durusau @ 11:46 am

Google wins book-scanning case: judge finds “fair use,” cites many benefits by Jeff John Roberts.

From the post:

Google has won a resounding victory in its eight-year copyright battle with the Authors Guild over the search giant’s controversial decision to scan more than 20 million library books and make them available on the internet.

In a ruling issued Thursday morning in New York, US Circuit Judge Denny Chin said the book scanning amounted to fair use because it was “highly transformative” and because it didn’t harm the market for the original work.

“Google Books provides significant public benefits,” writes Chin, describing it as “an essential research tool” and noting that the scanning service has expanded literary access for the blind and helped preserve the text of old books from physical decay.

Chin also rejected the theory that Google was depriving authors of income, noting that the company does not sell the scans or make whole copies of books available. He concluded, instead, that Google Books served to help readers discover new books and amounted to “new income from authors.”

Excellent!

In case you are interested in “why” Google prevailed: The Authors Guild, Inc., et al. v. Google, Inc.

Sets an important precedent for topic maps that extract small portions of print or electronic works for presentation to users.

Especially works that sit on library shelves, waiting for their copyright imprisonment to end.
