Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

May 19, 2016

Thoughts On How-To Help Drown A Copyright Troll?

Filed under: Intellectual Property (IP) — Patrick Durusau @ 6:33 pm

A riff on Copyright Trolls Rightscorp Are Teetering On The Verge Of Bankruptcy (arstechnica.com).

Suggestions?

Think of it as a service to the entire community, including legitimate claimants to intellectual property.

I tried to think of any methods I would exclude and came up empty.

You?

May 18, 2016

Colleges Shouldn’t Have to Deal With Copyright Monitoring [Broods of Copyright Vipers]

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 3:20 pm

Colleges Shouldn’t Have to Deal With Copyright Monitoring by Pamela Samuelson.

From the post:

Colleges have a big stake in the outcome of the lawsuit that three publishers, Cambridge University Press, Oxford University Press, and Sage Publications, brought against Georgia State University officials for copyright infringement. The lawsuit, now in its eighth year, challenged GSU’s policy that allowed faculty members to upload excerpts (mainly chapters) of in-copyright books for students to read and download from online course repositories.

Four years ago, a trial court held that 70 of the 75 challenged uses were fair uses. Two years ago, an appellate court sent the case back for a reassessment under a revised fair-use standard. The trial court has just recently ruled that of the 48 claims remaining in the case, only four uses, each involving multiple chapters, infringed. The question now is, What should be the remedy for those four infringements?

Sage was the only publisher that prevailed at all, and it lost more infringement claims than it won. Cambridge and Oxford came away empty-handed. Despite the narrowness of Sage’s win, all three publishers have asked the court for a permanent injunction that would impose many new duties on GSU and require close monitoring of all faculty uploads to online course repositories.

I expected better out of Cambridge and Oxford, especially Cambridge, which has in recent years allowed free electronic access to some printed textbooks.

Sage and the losing publishers, Cambridge and Oxford, seek to chill the exercise of fair use by not only Georgia State University but universities everywhere.

Pamela details the outrageous nature of the demands made by the publishers and concludes that she is rooting for GSU on appeal.

We should all root for GSU on appeal but that seems so unsatisfying.

It does nothing to darken the day for the broods of copyright vipers at Cambridge, Oxford or Sage.

In addition to creating this money pit for their publishers, the copyright vipers want to pad their nests by:


As if that were not enough, the publishers want the court to require GSU to provide them with access to the university’s online course system and to relevant records so the publishers could confirm that the university had complied with the record-keeping and monitoring obligations. The publishers have asked the court to retain jurisdiction so that they could later ask it to reopen and modify the court order concerning GSU compliance measures.

I don’t know how familiar you are with academic publishing but every academic publisher has a copyright department that shares physical space with acquisitions and publishing.

Whereas acquisitions and publishing are concerned with collection and dissemination of knowledge, while recovering enough profit to remain viable, the copyright department could just as well be employed by Screw.

Expanding the employment rolls of copyright departments to monitor fair use is another drain on their respective publishers.

If you need proof of copyright departments being a dead loss for their publishers, consider the most recent annual reports for Cambridge and Oxford.

Does either one highlight their copyright departments as centers of exciting development and income? Do they tout this eight year long battle against fair use?

No? I didn’t think so but wanted your confirmation to be sure.

I can point you to a history of Sage, but as a privately held publisher, it has no public annual report. Even that history, over changing economic times in publishing, finds no space to extol its copyright vipers and their role in the GSU case.

Beyond rooting for GSU, work with the acquisitions and publication departments at Cambridge, Oxford and Sage, to help improve their bottom line profit and drown their respective broods of copyright vipers.

How?

Before you sign a publishing agreement, ask your publisher for a verified statement of the ROI contributed by their copyright office.

If enough of us ask, the question will resonate across the academic publishing community.

May 6, 2016

Elsevier – “…the law is a ass- a idiot.”

Filed under: Intellectual Property (IP),Law — Patrick Durusau @ 2:07 pm

Elsevier Complaint Shuts Down SCI-HUB Domain Name by Ernesto.

From the post:


However, as part of the injunction Elsevier is able to request domain name registrars to suspend Sci-Hub’s domain names. This happened to the original .org domain earlier, and a few days ago the Chinese registrar Now.cn appears to have done the same for Sci-hub.io.

The domain name has stopped resolving and is now listed as “reserved” according to the latest WHOIS info. TorrentFreak reached out to Sci-Hub founder Alexandra Elbakyan, who informed us that the registrar sent her a notice referring to a complaint from Elsevier.

In addition to the alternative domain names users can access the site directly through the IP-address 31.184.194.81, or its domain on the Tor-network, which is pretty much immune to any takedown efforts.

Meanwhile, academic pirates continue to flood to Sci-Hub, domain seizure or not.

The best response to Elsevier is found in Oliver Twist by Charles Dickens, Chapter 52

“If the law supposes that,” said Mr. Bumble, squeezing his hat emphatically in both hands, “the law is a ass- a idiot.”

I do disagree with Ernesto’s characterization of users of Sci-Hub as “academic pirates.”

Elsevier and others have fitted their business model to a system of laws that exploits the unpaid labor of academics, based on research funded by the public, profiting from sales to libraries and preventing wider access out of spite.

There is piracy going on in academic publishing but it isn’t on the part of those seeking to access published research.

Please share access points for Sci-Hub widely and often.

April 20, 2016

Indigo is the new Blue

Filed under: Intellectual Property (IP),Law — Patrick Durusau @ 8:21 pm

Letter from Carl Malamud to Mr. Michael Zuckerman, Harvard Law Review Association.

You can read Carl’s letter for yourself.

Recommend to law students, law professors, judges, lawyers, people practicing for Jeopardy appearances, etc., The Indigo Book: An Open and Compatible Implementation of A Uniform System of Citation.

In any pleadings, briefs, essays, cite this resource as:

Sprigman et al., The Indigo Book: A Manual of Legal Citation, Public Resource (2016).

Every download of the Indigo Book saves someone $25.89 over a competing work on Amazon, which I won’t name for copyright reasons.

April 1, 2016

Takedowns Hurt Free Expression

Filed under: Fair Use,Free Speech,Intellectual Property (IP) — Patrick Durusau @ 9:00 pm

EFF to Copyright Office: Improper Content Takedowns Hurt Online Free Expression.

From the post:

Content takedowns based on unfounded copyright claims are hurting online free expression, the Electronic Frontier Foundation (EFF) told the U.S. Copyright Office Friday, arguing that any reform of the Digital Millennium Copyright Act (DMCA) should focus on protecting Internet speech and creativity.

EFF’s written comments were filed as part of a series of studies on the effectiveness of the DMCA, begun by the Copyright Office this year. This round of public comments focuses on Section 512, which provides a notice-and-takedown process for addressing online copyright infringement, as well as “safe harbors” for Internet services that comply.

“One of the central questions of the study is whether the safe harbors are working as intended, and the answer is largely yes,” said EFF Legal Director Corynne McSherry. “The safe harbors were supposed to give rightsholders streamlined tools to police infringement, and give service providers clear rules so they could avoid liability for the potentially infringing acts of their users. Without those safe harbors, the Internet as we know it simply wouldn’t exist, and our ability to create, innovate, and share ideas would suffer.”

As EFF also notes in its comments, however, the notice-and-takedown process is often abused. A recent report found that the notice-and-takedown system is riddled with errors, misuse, and overreach, leaving much legal and legitimate content offline. EFF’s comments describe numerous examples of bad takedowns, including many that seemed based on automated content filters employed by the major online content sharing services. In Friday’s comments, EFF outlined parameters endorsed by many public interest groups to rein in filtering technologies and protect users from unfounded blocks and takedowns.

A must read whether you are interested in pursuing traditional relief or have more immediate consequences for rightsholders in mind.

Takedowns cry out for the application of data mining to identify the people who pursue takedowns, the use of takedowns, who benefits, to say nothing of the bots that are presently prowling the web looking for new victims.

I for one don’t imagine that rightsholders’ bots are better written than most government software (you did hear about State’s latest vulnerability?).

Sharpening your data skills on takedown data would benefit you and the public, which is being sorely abused at the moment.
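Even a toy script over a spreadsheet of notices will start to surface patterns. Here is a minimal sketch, assuming a hypothetical CSV export of notices; the file name and column names are my invention, and a real dataset (one assembled from the Lumen database, for example) would need its own field mapping:

```python
import csv
from collections import Counter

# Minimal sketch: tally takedown notices by sender and by target.
# "takedown_notices.csv" and its columns ("sender", "recipient") are
# hypothetical placeholders, not a real export format.

senders = Counter()
recipients = Counter()

with open("takedown_notices.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        senders[row["sender"]] += 1
        recipients[row["recipient"]] += 1

print("Top senders of takedown notices:")
for sender, count in senders.most_common(10):
    print(f"  {sender}: {count}")

print("Most-targeted recipients:")
for recipient, count in recipients.most_common(10):
    print(f"  {recipient}: {count}")
```

From there it is a short step to joining sender names against error rates, beneficiaries and the bots doing the prowling.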

March 29, 2016

Takedown Bots – Make It Personal

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 8:26 pm

Carl Malamud tweeted on 29 March 2016:

Hate takedown bots, both human and coded. If you’re going to accuse somebody of theft, you should make it personal.

in retweeting:

Mitch Stoltz
@mitchstoltz

How takedown-bots are censoring the web. https://www.washingtonpost.com/news/the-intersect/wp/2016/03/29/how-were-unwittingly-letting-robots-censor-the-web/

Carl has the right of it.

Users should make the use of take down notices very personal.

After all, illegitimate take down notices are thefts from the public domain and/or fair use.

Caitlin Dewey‘s How we’re unwittingly letting robots censor the Web is a great non-technical piece on the fuller report, Notice and Takedown in Everyday Practice.

Jennifer M. Urban, University of California, Berkeley – School of Law, Brianna L. Schofield, University of California, Berkeley – School of Law, and Joe Karaganis, Columbia University – The American Assembly, penned this abstract:

It has been nearly twenty years since section 512 of the Digital Millennium Copyright Act established the so-called notice and takedown process. Despite its importance to copyright holders, online service providers, and Internet speakers, very little empirical research has been done on how effective section 512 is for addressing copyright infringement, spurring online service provider development, or providing due process for notice targets.

This report includes three studies that draw back the curtain on notice and takedown:

1. using detailed surveys and interviews with more than three dozen respondents, the first study gathers information on how online service providers and rightsholders experience and practice notice and takedown on a day-to-day basis;

2. the second study examines a random sample from over 100 million notices generated during a six-month period to see who is sending notices, why, and whether they are valid takedown requests; and

3. the third study looks specifically at a subset of those notices that were sent to Google Image Search.

The findings suggest that whether notice and takedown “works” is highly dependent on who is using it and how it is practiced, though all respondents agreed that the Section 512 safe harbors remain fundamental to the online ecosystem. Perhaps surprisingly in light of large-scale online infringement, a large portion of OSPs still receive relatively few notices and process them by hand. For some major players, however, the scale of online infringement has led to automated, “bot”-based systems that leave little room for human review or discretion, and in a few cases notice and takedown has been abandoned in favor of techniques such as content filtering. The second and third studies revealed surprisingly high percentages of notices of questionable validity, with mistakes made by both “bots” and humans.

The findings strongly suggest that the notice and takedown system is important, under strain, and that there is no “one size fits all” approach to improving it. Based on the findings, we suggest a variety of reforms to law and practice.

At 160 pages it isn’t a quick or lite read.

The gist of both Caitlin’s post and the fuller report is that automated systems are increasingly being used to create and enforce take down requests.

Despite the reported margin of error, Caitlin notes:

Despite the margin of error, most major players seem to be trending away from human review. The next frontier in the online copyright wars is automated filtering: Many rights-holders have pressed for tools that, like YouTube’s Content ID, could automatically identify protected content and prevent it from ever publishing. They’ve also pushed for “staydown” measures that would keep content from being reposted once it’s been removed, a major complaint with the current system.

One source Caitlin quotes is worth pausing over:

…agreed to speak to The Post on condition of anonymity because he has received death threats over his work, said that while his company stresses accuracy and fairness, it’s impossible for seven employees to vet each of the 90,000 links their search spider finds each day. Instead, the algorithm classifies each link as questionable, probable or definite infringement, and humans only review the questionable ones before sending packets of takedown requests to social networks, search engines, file-hosting sites and other online platforms.

Copyright enforcers should discover that their thefts from the public domain and infringements on fair use put them on a par with car burglars or shoplifters.

What copyright enforcers lack is an incentive to err on the side of not issuing questionable take down notices.

If the consequences of illegitimate take down notices are high enough, they will spend the funds necessary to enforce only “legitimate” rights.

If you are interested in righteousness over effectiveness, by all means pursue reform of “notice and takedown” in the copyright-holder-owned US Congress.

On the other hand, someone, more than a single someone, is responsible for honoring “notice and takedown” requests. Those someones also own members of Congress and can effectively seek changes that victims of illegitimate takedown requests cannot.

Imagine a leak from Yahoo! that outs those responsible for honoring “notice and takedown” requests.

Or the members of “Google’s Trusted Copyright Removal Program.” Besides “Glass.”

Or the takedown requests for YouTube.

Theft from the public cannot be sustained in the bright light of transparency.

March 8, 2016

Patent Sickness Spreads [Open Source Projects on Prior Art?]

Filed under: Intellectual Property (IP),Natural Language Processing,Patents,Searching — Patrick Durusau @ 7:31 pm

James Cook reports a new occurrence of patent sickness in Facebook has an idea for software that detects cool new slang before it goes mainstream.

The most helpful part of James’ post is the graphic outline of the “process” patented by Facebook:

facebook-patent

I sure do hope James has not patented that presentation because it makes the Facebook patent, err, clear.

Quick show of hands on originality?

While researching this post, I ran across Open Source as Prior Art at the Linux Foundation. Are there other public projects that research and post prior art with regard to particular patents?

An armory of weapons for opposing ill-advised patents.

The Facebook patent is: 9,280,534 Hauser, et al. March 8, 2016, Generating a social glossary:

Its abstract:

Particular embodiments determine that a textual term is not associated with a known meaning. The textual term may be related to one or more users of the social-networking system. A determination is made as to whether the textual term should be added to a glossary. If so, then the textual term is added to the glossary. Information related to one or more textual terms in the glossary is provided to enhance auto-correction, provide predictive text input suggestions, or augment social graph data. Particular embodiments discover new textual terms by mining information, wherein the information was received from one or more users of the social-networking system, was generated for one or more users of the social-networking system, is marked as being associated with one or more users of the social-networking system, or includes an identifier for each of one or more users of the social-networking system. (emphasis in original)
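To put the originality question in perspective, here is roughly what “discovering new textual terms” amounts to when stripped to its core: count the terms users type and flag the frequent ones that are not already in your vocabulary. A minimal sketch follows; the vocabulary, sample posts and cutoff are invented for illustration and are not Facebook’s code:

```python
from collections import Counter

# Known vocabulary and sample posts are invented for illustration only.
known_terms = {"that", "show", "was", "the", "party", "weekend"}
posts = [
    "that show was lit",
    "the party was lit tbh",
    "lit weekend tbh",
]

# Count terms that are not already in the known vocabulary.
counts = Counter(
    word
    for post in posts
    for word in post.lower().split()
    if word not in known_terms
)

MIN_OCCURRENCES = 2  # arbitrary cutoff for "trending"
glossary = {term for term, n in counts.items() if n >= MIN_OCCURRENCES}
print(glossary)  # {'lit', 'tbh'}
```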

February 22, 2016

U.S. Patents Requirements: Novel/Non-Obvious or Patent Fee?

Filed under: Intellectual Property (IP),Patents,Searching — Patrick Durusau @ 8:34 am

IBM brags about its ranking in patents granted, IBM First in Patents for 23rd Consecutive Year, and is particularly proud of patent 9087304, saying:

We’ve all been served up search results we weren’t sure about, whether they were for “the best tacos in town” or “how to tell if your dog has eaten chocolate.” With IBM Patent no. 9087304, you no longer have to second-guess the answers you’re given. This new tech helps cognitive machines find the best potential answers to your questions by thinking critically about the trustworthiness and accuracy of each source. Simply put, these machines can use their own judgment to separate the right information from wrong. (From: http://ibmblr.tumblr.com/post/139624929596/weve-all-been-served-up-search-results-we-werent

Did you notice that the 1st for 23 years post did not have a single link for any of the patents mentioned?

You would think IBM would be proud enough to link to its new patents and especially 9087304, that “…separate[s] right information from wrong.”

But if you follow the link for 9087304, you get an impression of one reason IBM didn’t include the link.

The abstract for 9087304 reads:

Method, computer program product, and system to perform an operation for a deep question answering system. The operation begins by computing a concept score for a first concept in a first case received by the deep question answering system, the concept score being based on a machine learning concept model for the first concept. The operation then excludes the first concept from consideration when analyzing a candidate answer and an item of supporting evidence to generate a response to the first case upon determining that the concept score does not exceed a predefined concept minimum weight threshold. The operation then increases a weight applied to the first concept when analyzing the candidate answer and the item of supporting evidence to generate the response to the first case when the concept score exceeds a predefined maximum weight threshold.

I will spare you further recitations from the patent.
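Stripped of the boilerplate, the claimed “operation” reduces to a pair of threshold comparisons on a model score. A minimal sketch of that logic, with names, numbers and the weight bump invented by me rather than taken from IBM:

```python
# Thresholds and the weight boost are invented; the patent names no values.
MIN_WEIGHT_THRESHOLD = 0.2
MAX_WEIGHT_THRESHOLD = 0.8
WEIGHT_BOOST = 2.0

def adjust_concept_weight(concept_score, base_weight):
    """Return the weight to apply to a concept, or None to exclude it."""
    if concept_score <= MIN_WEIGHT_THRESHOLD:
        return None                        # exclude the concept from consideration
    if concept_score > MAX_WEIGHT_THRESHOLD:
        return base_weight * WEIGHT_BOOST  # increase the weight applied
    return base_weight                     # otherwise use the concept as-is

print(adjust_concept_weight(0.1, 1.0))  # None
print(adjust_concept_weight(0.9, 1.0))  # 2.0
```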

Show of hands, do U.S. Patents always require:

  1. novel/non-obvious ideas
  2. patent fee
  3. #2 but not #1

?

Judge rankings by # of patents granted accordingly.

February 18, 2016

How Much Can paragraph -> subparagraph mean? Lots under TPP!

Filed under: Government,Intellectual Property (IP) — Patrick Durusau @ 7:55 pm

Sneaky Change to the TPP Drastically Extends Criminal Penalties by Jeremy Malcolm.

From the post:


What does this surreptitious change from “paragraph” to “subparagraph” mean? Well, in its original form the provision exempted a country from making available any of the criminal procedures and penalties listed above, except in circumstances where there was an impact on the copyright holder’s ability to exploit their work in the market.

In its revised form, the only criminal provision that a country is exempted from applying in those circumstances is the one to which the footnote is attached—namely, the ex officio action provision. Which means, under this amendment, all of the other criminal procedures and penalties must be available even if the infringement has absolutely no impact on the right holder’s ability to exploit their work in the market. The only enforcement provision that countries have the flexibility to withhold in such cases is the authority of state officials to take legal action into their own hands.

Sneaky, huh?

The United States Trade Representative (USTR) isn’t representing your interests or mine in the drafting of the TPP.

If you had any doubt in that regard, Jeremy’s post on this change and others should remove it.

February 15, 2016

BMG Seeks to Violate Privacy Rights – Cox Refuses to Aid and Abet

Filed under: Cybersecurity,Intellectual Property (IP),Privacy,Security — Patrick Durusau @ 4:58 pm

Cox Refuses to Spy on Subscribers to Catch Pirates by Ernesto Van der Sar.

From the post:

Last December a Virginia federal jury ruled that Internet provider Cox Communications was responsible for the copyright infringements of its subscribers.

The ISP was found guilty of willful contributory copyright infringement and must pay music publisher BMG Rights Management $25 million in damages.

The verdict was a massive victory for the music company and a disaster for Cox, but the case is not closed yet.

A few weeks ago BMG asked the court to issue a permanent injunction against Cox Communications, requiring the Internet provider to terminate the accounts of pirating subscribers and share their details with the copyright holder.

In addition BMG wants the Internet provider to take further action to prevent infringements on its network. While the company remained vague on the specifics, it mentioned the option of using invasive deep packet inspection technology.

Last Friday, Cox filed a reply pointing out why BMG’s demands go too far, rejecting the suggestion of broad spying and account termination without due process.

“To the extent the injunction requires either termination or surveillance, it imposes undue hardships on Cox, both because the order is vague and because it imposes disproportionate, intrusive, and punitive measures against households and businesses with no due process,” Cox writes (pdf).

Read the rest of Ernesto’s post for sure but here’s a quick summary:

Cox.com is spending money to protect your privacy.

I don’t live in a Cox service area but if you do, sign up with Cox and say their opposition to BMG is driving your new subscription. Positive support always rings louder than protesters with signs and litter.

BMG.com is spending money to violate your privacy.

BMG is a subsidiary of Bertelsmann, which claims 112,037 employees.

I wonder how many of those employees have signed off on the overreaching and abusive positions of BMG.

Perhaps members of the public oppressed by BMG and/or Bertelsmann should seek them out to reason with them.

Bearing in mind that “rights” depend upon rules you choose to govern your discussions/actions.

February 3, 2016

Google Paywall Loophole Going Bye-Bye [Fair Use Driving Pay-Per-View Traffic]

Filed under: Cybersecurity,Fair Use,Intellectual Property (IP) — Patrick Durusau @ 2:51 pm

The Wall Street Journal tests closing the Google paywall loophole by Lucia Moses.

From the post:

The Wall Street Journal has long had a strict paywall — unless you simply copy and paste the headline into Google, a favored route for those not wanting to pony up $200 a year. Some users have noticed in recent days that the trick isn’t working.

A Journal spokesperson said the publisher was running a test to see if doing so would entice would-be subscribers to pay up. The rep wouldn’t elaborate on how long and extensive the experiment was and if permanently closing the loophole was a possible outcome.

“We are experimenting with a number of different trial mechanics at the moment to provide a better subscription taster for potential new customers,” the rep said. “We are a subscription site and we are always looking at better ways to optimize The Wall Street Journal experience for our members.”

The Wall Street Journal can deprive itself of the benefits of “fair use” if it wants to, but is that a sensible position?

Fair Use Benefits the Wall Street Journal

Rather than a total ban on copying, what if the amount of an article that can be copied were set by algorithm? Say, at a minimum, the first two or three paragraphs of any story could be copied, whether you arrive from Google or directly on the WSJ site.
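A minimal sketch of the idea, purely illustrative; the function, the paragraph count and the subscriber flag are my invention, not anything the WSJ runs:

```python
# Minimal sketch of "algorithmic fair use": non-subscribers always get the
# first few paragraphs of a story, subscribers get the whole thing.
FREE_PARAGRAPHS = 3

def visible_text(article_paragraphs, is_subscriber):
    """Return the paragraphs a given visitor gets to read."""
    if is_subscriber:
        return article_paragraphs
    return article_paragraphs[:FREE_PARAGRAPHS]

story = [
    "Lead paragraph.",
    "Second paragraph.",
    "Third paragraph.",
    "The detailed analysis readers actually pay for...",
]
print(visible_text(story, is_subscriber=False))  # first three paragraphs only
```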

Think about it. Wall Street Journal readers aren’t paying to skim the lead paragraphs in the WSJ. They are paying to see the full story and analysis in particular subject areas.

Bloggers, such as myself, cannot drive content seekers to the WSJ because the first sentence or two isn’t enough for readers to develop an interest in the WSJ report.

If I could quote the first 2 or 3 paragraphs, add in some commentary and perhaps other links, then a visitor to the WSJ is visiting to see the full content the Wall Street Journal has to offer.

The story lead is acting, as it should, to drive traffic to the Wall Street Journal, possibly from readers who won’t otherwise think of the Wall Street Journal. Some of my readers on non-American/European continents for example.

Bloggers Driving Readers to Wall Street Journal Pay-Per-View Content

Developing algorithmic fair use as I describe it would enlist an army of bloggers in spreading notice of the Wall Street Journal’s pay-per-view content, at no expense to the Wall Street Journal. As a matter of fact, bloggers would be alerting readers to pay-per-view WSJ content at the blogger’s own expense.

It may just be me but if someone were going to drive viewers to pay-per-view content on my site, at their own expense, with fair use of content, I would be insane to prevent that. But, I’m not the one grasping at dimes while $100 bills are flying overhead.

Close the Loophole, Open Up Fair Use

Full disclosure, I don’t have any evidence for fair use driving traffic to the Wall Street Journal because that evidence doesn’t exist. The Wall Street Journal would have to enable fair use and track appearance of fair use content and the traffic originating from it. Along with conversions from that additional traffic.

Straightforward data analytics, but it won’t happen by itself. When the WSJ succeeds with such a model, you can be sure that other paywall publishers will be quick to follow suit.
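The analytics themselves are not exotic. A toy sketch of the comparison, with invented visit records standing in for real referral and conversion logs:

```python
from collections import defaultdict

# Invented visit records: did the reader arrive via a blogger's fair-use
# excerpt or some other route, and did the visit convert to a subscription?
visits = [
    {"referrer": "blog-excerpt", "subscribed": True},
    {"referrer": "blog-excerpt", "subscribed": False},
    {"referrer": "direct", "subscribed": False},
    {"referrer": "search", "subscribed": True},
]

totals = defaultdict(int)
conversions = defaultdict(int)
for visit in visits:
    totals[visit["referrer"]] += 1
    conversions[visit["referrer"]] += int(visit["subscribed"])

for source in sorted(totals):
    rate = conversions[source] / totals[source]
    print(f"{source}: {conversions[source]}/{totals[source]} converted ({rate:.0%})")
```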

Caveat: Yes, there will be people who will only ever consume the free use content. And your question? If they aren’t ever going to be paying customers and the same fair use is delivering paying customers, will you lose the latter in order to spite the former?

Isn’t that like cutting off your nose to spite your face?

Historical PS:

I once worked for a publisher that felt a “moral obligation,” their words, not mine, to prevent anyone from claiming a missing journal issue to which they might not be entitled. Yeah. Journal issues that were as popular as the Watchtower is among non-Jehovah’s Witnesses. Cost to the publisher, about $3.00 per issue, cost to verify entitlement, a full time position at the publisher.

I suspect claims ran less than 200 per year. My suggestion was to answer any request with thanks, here’s your missing copy. End of transaction. Track claims only to prevent abuse. Moral outrage followed.

Is morality the basis for your pay-per-view access policy? I thought pay-per-view was a way to make money.

Pass this post along to the WSJ if you know anyone there. Free suggestion. Perhaps they will be interested in other, non-free suggestions.

January 14, 2016

Can You Help With Important But Non-Visual Story? – The Blue People

Filed under: Intellectual Property (IP),Law — Patrick Durusau @ 4:34 pm

Accelerate Your Newsgathering and Verification reported a post that had 3 out of 5 newsgathering tools for images. But as I mentioned there, there are important but non-visual stories that need improved tools for newsgathering and verification.

The copyright struggle between the Blue People and Carl Malamud is an important, but thus far, non-visual story.

Here’s the story in a nutshell:

Laws, court decisions, agency rulings, etc., that govern our daily lives, are found in complex document stores. They have complex citation systems to enable anyone to find a particular law, decision, or rule.

Think of the Dewey Decimal system or the Library of Congress classification, except several orders of magnitude more complex. And the systems vary from state to state, etc.

It’s important to get citations right. Well, let’s let the BlueBook speak for itself:

The primary purpose of a citation is to facilitate finding and identifying the authority cited…. (A Uniform System of Citation, Tenth Edition, page iv.)

If you are going to quote a law or have access to it, you must have the correct citation.

In order to compel people to obey the law, they must have fair notice of it. And it stands to reason that if you can’t find the law because you have no access to a citation guide, you are SOL as far as access to the law goes.

The courts come into the picture, being as lazy as, if not lazier than, programmers, by referring to the “BlueBook” as the standard for citations. Courts could have written out their citation practices but, as I said, courts are lazy.

Over time, the courts enshrined their references to the “BlueBook” in court rules, which grants the “BlueBook” an informal monopoly on legal citations and access to the law.

As you have guessed by now, the Blue People, with their government created, unregulated monopoly, charge for the privilege of knowing how to find the law.

The Blue People are quite fond of their monopoly and are loath to relinquish it. Even though a compilation of how statutes, regulations and court decisions are cited in fact is “sweat of the brow” work and not eligible for copyright protection.

A Possible Solution, Based on Capturing Public Facts

The answer to claims of copyright by the Blue People is to collect evidence of the citation practices in all fifty states and federal practice and publish such evidence along with advisory comments on usage.

Fifty law student/librarians could accomplish the task in parallel using modern search technologies and legal databases. Their findings would need to be collated but once done, every state plus federal practice, including nuances, would be easily accessible to anyone.

The courts, as practitioners of precedent,* will continue to support their self-created BlueBook monopoly.

But most judges will have difficulty distinguishing Holder, Attorney General, et al. v. Humanitarian Law Project et al. 561 U. S. 1 (2010) (following the BlueBook) and Holder, Attorney General, et al. v. Humanitarian Law Project et al. 561 U. S. 1 (2010) (following the U.S. Supreme Court and/or some recording of how cases are cited by the US Supreme Court).

If you are in the legal profession or aspire to be, don’t forget Jonathan Swift’s observation in Gulliver’s Travels:

It is a maxim among these lawyers that whatever has been done before, may legally be done again: and therefore they take special care to record all the decisions formerly made against common justice, and the general reason of mankind. These, under the name of precedents, they produce as authorities to justify the most iniquitous opinions; and the judges never fail of directing accordingly.

The inability of courts to distinguish between “BlueBook” and “non-BlueBook” citations will over time render their observance of precedent a nullity.

Not as satisfying as riding them and the Blue People down with war horns blowing but just as effective.

The Need For Visuals

If you have read this far, you obviously don’t need visuals to keep your interest in a story. Particularly a story about access to law and similarly exciting topics. It is an important topic, just not one that really gets your blood pumping.

How would you create visuals to promote public access to the laws that govern our day-to-day lives?

I’m no artist but one thought would be to show people trying to consult law books that are chained shut by their citations. Or perhaps one or two of the identifiable Blue People as Jacob Marley-type figures with bound law books and heavy chains about them?

The “…could have shared…might have shared…” lines would work well with access to legal materials.

Ping me with suggested images. Thanks!

January 12, 2016

Law as Pay-to-Play – ASTM International vs. Public.Resource.org, Inc.

Filed under: Government,Intellectual Property (IP),Law — Patrick Durusau @ 8:10 pm

Carl Malamud has been hitting Twitter hard today as he posts links to new materials in ASTM International vs. Public.Resource.org, Inc. (case docket).

The crux of the case is whether a legal authority, like the United States, can pass a law that requires citizens to buy materials from private organizations, in order to know what the law says.

That is, a law will cite a standard, say one from ASTM, and you are bound by the terms of that law, which aren’t clear unless you have a copy of the standard from ASTM. ASTM will be more than happy to sell you a copy.

It’s interesting that ASTM, which has reasonable membership fees of $75 a year, would be the lead plaintiff in this case.

There are technical committees associated with ANSI that have membership fees of $1,200 or more per year. And that is the lowest membership category.

I deeply enjoyed Carl’s tweet that described the ANSI amicus brief as “the sky is falling.”

No doubt from ANSI’s perspective, if Public.Resource.org, Inc. prevails, which it should under any sensible notice-of-the-law reasoning, the sky will be falling.

ANSI and its kin profit by creating a closed club of well-heeled vendors who can pay for early access and participate in development of standards.

You have heard the term “white privilege?” In the briefs for ASTM and its friends, you will realize how deeply entrenched “corporate privilege” is in the United States. The ANSI brief is basically “this is how we do it and it works for us, go away.” No sense of other at all.

There is a running implication that standards development organizations (SDOs) have to sell copies of standards to support standards activity. At least on a quick skim, I haven’t seen any documentation on that point. In fact, the W3C, which makes a large number of standards, seems to do OK giving standards away for free.

I can’t help but wonder how the presiding judge will react should a data leak from one of the plaintiffs prove that the “sale of standards” rationale is entirely specious from a financial perspective. That is, membership, the “pay-to-play,” is really the deciding factor.

That doesn’t strengthen or weaken the public notice of the law but I do think it is a good indication of the character of the plaintiffs and the lengths they are willing to go to preserve corporate privilege.

In case you are still guessing, I’m on the side of Public.Resource.org.

December 30, 2015

Bloggers! Help Defend The Public Domain – Prepare To Host/Repost “Baby Blue”

Filed under: Intellectual Property (IP),Open Access,Public Data — Patrick Durusau @ 11:36 am

Harvard Law Review Freaks Out, Sends Christmas Eve Threat Over Public Domain Citation Guide by Mike Masnick.

From the post:

In the fall of 2014, we wrote about a plan by public documents guru Carl Malamud and law professor Chris Sprigman, to create a public domain book for legal citations (stay with me, this isn’t as boring as it sounds!). For decades, the “standard” for legal citations has been “the Bluebook” put out by Harvard Law Review, and technically owned by four top law schools. Harvard Law Review insists that this standard of how people can cite stuff in legal documents is covered by copyright. This seems nuts for a variety of reasons. A citation standard is just a method for how to cite stuff. That shouldn’t be copyrightable. But the issue has created ridiculous flare-ups over the years, with the fight between the Bluebook and the open source citation tool Zotero representing just one ridiculous example.

In looking over all of this, Sprigman and Malamud realized that the folks behind the Bluebook had failed to renew the copyright properly on the 10th edition of the book, which was published in 1958, meaning that that version of the book was in the public domain. The current version is the 19th edition, but there is plenty of overlap from that earlier version. Given that, Malamud and Sprigman announced plans to make an alternative to the Bluebook called Baby Blue, which would make use of the public domain material from 1958 (and, I’d assume, some of their own updates — including, perhaps, citations that it appears the Bluebook copied from others).

As soon as “Baby Blue” drops, one expects the Harvard Law Review with its hired thugs Ropes & Gray to swing into action against Carl Malamud and Chris Sprigman.

What if the world of bloggers even those odds just a bit?

What if as soon as Baby Blue hits the streets, law bloggers, law librarian bloggers, free speech bloggers, open access bloggers, and any other bloggers all post Baby Blue to their sites and post it to file repositories?

I’m game.

Are you?

PS: If you think this sounds risky, ask yourself how much racial change would have happened in the South in the 1960s if Martin Luther King had marched alone.

November 5, 2015

Trans-Pacific Partnership (full text)

Filed under: Government,Intellectual Property (IP),Law — Patrick Durusau @ 4:14 pm

Trans-Pacific Partnership (full text)

The Trans-Pacific Partnership text has been released!

Several of the sites I have tried were down due to traffic but this medium.com site appears to be holding up.

Be forewarned that this is a bizarre presentation of the text with promotional logos, etc.

A wave of commentary is sure to follow and within a few days I will collect up the best that is relevant to software/IP and post about it.

Just for grins, check your reading time against the suggested reading times by Medium. It rates the Intellectual Property chapter (18) at 106 minutes.

Hmmm, it might be possible to read it in 106 minutes but fully understanding what you have read is likely to take longer.

Enjoy!

October 16, 2015

Google Book-Scanning Project Is Fair Use, 2nd Circ. Says

Filed under: Intellectual Property (IP) — Patrick Durusau @ 10:35 am

Google Book-Scanning Project Is Fair Use, 2nd Circ. Says by Bill Donahue.

From the post:

Law360, New York (October 16, 2015, 10:13 AM ET) — The Second Circuit ruled Friday that Google Inc.’s project to digitize and index millions of copyrighted books without permission was legal under the fair use doctrine, handing the tech giant a huge victory in a long-running fight with authors.

Coming more than a decade after the Authors Guild first sued over what would become “Google Books,” the appeals court’s opinion said that making the world’s books text-searchable — while not allowing users to read more than a snippet of text — was a sufficiently “transformative use” of the author’s content to be protected by the doctrine.

“Google’s making of a digital copy to provide a search function is a transformative use, which augments public knowledge by making available information about plaintiffs’ books without providing the public with a substantial substitute for matter protected by the plaintiffs’ copyright interests in the original works or derivatives of them,” the appeals court said.

Excellent!

Spread the good news!

I will update with a link to the opinion.


Apologies for the delayed update!

The Authors Guild vs. Google, Docket No. 13-4829-cv

October 9, 2015

Confirmation of TPP = Death of Public Domain

Filed under: Government,Intellectual Property (IP) — Patrick Durusau @ 3:45 pm

Wikileaks has leaked TPP Treaty: Intellectual Property Rights Chapter – 5 October 2015.

View the leaked “TPP Treaty: Intellectual Property Rights Chapter, Consolidated Text” (PDF, HTML).

Much is objectionable in the “Intellectual Property Rights Chapter” of the Trans-Pacific Partnership (TPP), but nothing so pernicious as its attempt to destroy the public domain.

Extraordinary claim you say?

Consider the following:

Article QQ.H.2: {Presumptions}

1. In civil, criminal, and if applicable, administrative proceedings involving copyright or related rights, each Party shall provide:

(a) for a presumption[109] that, in the absence of proof to the contrary, the person whose name is indicated in the usual manner[110] as the author, performer, producer of the work, performance, or phonogram, or as applicable, the publisher is the designated right holder in such work, performance, or phonogram; and

(b) for a presumption that, in the absence of proof to the contrary, the copyright or related right subsists in such subject matter.

The public domain is made up of works that have been contributed to the public or on which copyright has expired. Anyone claiming copyright on such a work has the burden of proof.

If the TPP is confirmed, all those works in the public domain, the ones with the name of an author, performer, producer or publisher, are presumed to be under copyright.

If you are sued for quoting or distributing such a work, you have the burden of proving the work isn’t subject to copyright. That burden of proof will be at your expense.

The public domain is destroyed by a presumption hidden in section double Q, subsection H, subsection 2.

That’s not just my reading; check out: Copyright Presumptions and the Trans‐Pacific Partnership Agreement.

I haven’t seen an assault against the very notion of the public domain since the OASIS Rights Language TC.

The goal of the Rights Language TC was to create a content management system language that required all content, free or not, to carry its header. And since the language wasn’t going to be free, you would be paying a tax to say your content was free or public domain. By default in the language.

Telcos could man the routers and prevent transmission of unlicensed content, i.e., content without the header. The public domain was collateral damage in an effort to regulate transmission of content.

The Rights Language TC assault on the public domain failed.

Time to make the TPP assault on the public domain fail as well.

PS: Reach out to old friends, make new friends, activate your social networks. The problem is the Trans-Pacific Partnership, the solution is NO!

September 14, 2015

‘Dancing Baby’ Wins Copyright Case

Filed under: Fair Use,Intellectual Property (IP) — Patrick Durusau @ 9:00 pm

‘Dancing Baby’ Wins Copyright Case by Laura Wagner.

From the post:

A baby bobs up and down in a kitchen, as a Prince song plays in the background. His mother laughs in the background and his older sister zooms in and out of the frame.

This innocuous 29-second home video clip was posted to YouTube in 2007 and sparked a long legal proceeding on copyright and fair use law.

In the case, Lenz v. Universal — which has gained notoriety as the “dancing baby” lawsuit — Universal Music Group sent YouTube a warning to take the video down, claiming copyright infringement under the Digital Millennium Copyright Act. Then, Stephanie Lenz, poster of the video and mother of the baby, represented by Electronic Frontier Foundation, sued Universal for wrongly targeting lawful fair use.

Today, eight years later, a federal appeals court has sided with the dancing baby.

If you need more legal background on the issues, consider the EFF page on Lenz v. Universal (links to original court documents), or the Digital Media Law page, Universal Music v. Lenz.

The DMCA (Digital Millennium Copyright Act) should be amended to presume fair use unless and until the complaining party convinces a court that someone else is profiting from the use of their property. No profit, no foul. No more non-judicial demands for takedowns of any content.

September 10, 2015

Copyrighted Inkspots?

Filed under: Intellectual Property (IP) — Patrick Durusau @ 5:57 am

Hacker mag 2600 laughs off Getty Images inkspots copyright claim by Richard Chirgwin.

From the post:

Venerable hacker publication 2600 is fighting off what looks like an early candidate for the most egregious copyright infringement accusation of 2015.

On a 2012 cover, 2600 used an ink-splatter effect. A group naming itself the Trunk Archive – ultimately owned by Getty Images – is now playing the pay-up game because it’s got an image that also has an ink-splatter effect.

“We thought it was a joke for almost an entire day until one of us figured out that they were actually claiming our use of a small bit of ink splatter that was on one of their images was actionable”, the 2600 team wrote on Tuesday.

Richard discloses the source of the 2600 inkspots (not owned by Getty Images) and resources should you receive an “extortion” letter from Trunk Archive.

Copyright enforcement is a non-creative activity and distracts others from being creative. Odd outcome for a policy that alleges it encourages creativity.

May 27, 2015

U.S. sides with Oracle in Java copyright dispute with Google

Filed under: Intellectual Property (IP),Java — Patrick Durusau @ 10:52 am

U.S. sides with Oracle in Java copyright dispute with Google by John Ribeiro.

From the post:

The administration of President Barack Obama sided with Oracle in a dispute with Google on whether APIs, the specifications that let programs communicate with each other, are copyrightable.

Nothing about the API (application programming interface) code at issue in the case materially distinguishes it from other computer code, which is copyrightable, wrote Solicitor General Donald B. Verrilli in a filing in the U.S. Supreme Court.

The court had earlier asked for the government’s views in this controversial case, which has drawn the attention of scientists, digital rights group and the tech industry for its implications on current practices in developing software.

Although Google has raised important concerns about the effects that enforcing Oracle’s copyright could have on software development, those concerns are better addressed through a defense on grounds of fair use of copyrighted material, Verrilli wrote.

Neither the ScotusBlog case page, Google Inc. v. Oracle America, Inc., nor the Solicitor General’s Supreme Court Brief page, as of May 27, 2015, has a copy of the Solicitor General’s brief.

I hesitate to comment on the Solicitor General’s brief sight unseen, as media reports on legal issues are always vague and frequently wrong.

Setting to one side whatever Solicitor General Verrilli may or may not have said, software interoperability should be the default, not something established by affirmative defenses. Public policy should encourage interoperability of software.

Consumers, large and small, should be aware that reduction of interoperability between software means higher costs for consumers. Something to keep in mind when you are looking for a vendor.

November 20, 2014

Senate Republicans are getting ready to declare war on patent trolls

Filed under: Intellectual Property (IP),Topic Maps — Patrick Durusau @ 7:00 pm

Senate Republicans are getting ready to declare war on patent trolls by Timothy B. Lee

From the post:

Republicans are about to take control of the US Senate. And when they do, one of the big items on their agenda will be the fight against patent trolls.

In a Wednesday speech on the Senate floor, Sen. Orrin Hatch (R-UT) outlined a proposal to stop abusive patent lawsuits. “Patent trolls – which are often shell companies that do not make or sell anything – are crippling innovation and growth across all sectors of our economy,” Hatch said.

Hatch, the longest-serving Republican in the US Senate, is far from the only Republican in Congress who is enthusiastic about patent reform. The incoming Republican chairmen of both the House and Senate Judiciary committees have signaled their support for patent legislation. And they largely see eye to eye with President Obama, who has also called for reform.

“We must improve the quality of patents issued by the U.S. Patent and Trademark Office,” Hatch said. “Low-quality patents are essential to a patent troll’s business model.” His speech was short on specifics here, but one approach he endorsed was better funding for the patent office. That, he argued, would allow “more and better-trained patent examiners, more complete libraries of prior art, and greater access to modern information technologies to address the agency’s growing needs.”

I would hate to agree with Senator Hatch on anything but there is no doubt that low-quality patents are rife at the U.S. Patent and Trademark Office. Whether patent trolls simply took advantage of the quality of patents or are responsible for low-quality patents is hard to say.

In any event, the call for “…more complete libraries of prior art, and greater access to modern information technologies…” sounds like a business opportunity for topic maps.

After all, we all know that faster, more comprehensive search engines of the patent literature only give you more material to review. They don’t give you more relevant material to review. Or give you material you did not know to look for. Only additional semantics has the power to accomplish either of those tasks.

There are those who will keep beating bags of words in hopes that semantics will appear.

Don’t be one of those. Choose an area of patents of interest and use interactive text mining to annotate existing terms with semantics (subject identity), which will reduce misses and increase the usefulness of “hits.”
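A toy illustration of what I mean by annotating terms with subject identity, so that synonymous prior art stops slipping through keyword search; the identifiers, synonym sets and documents below are invented for the example:

```python
# Variant terms map to one subject identifier; queries are expanded through
# that mapping before matching, so wording differences don't hide prior art.
subject_index = {
    "flash memory": "subject:nonvolatile-memory",
    "nand storage": "subject:nonvolatile-memory",
    "nonvolatile storage": "subject:nonvolatile-memory",
}

documents = {
    "patent-001": "A controller for nand storage arrays...",
    "patent-002": "An improved nonvolatile storage cell...",
    "patent-003": "A method for brewing coffee...",
}

def expand(query):
    """Return every surface term that shares the query's subject identity."""
    subject = subject_index.get(query.lower())
    if subject is None:
        return {query.lower()}
    return {term for term, s in subject_index.items() if s == subject}

def search(query):
    """Match documents against the expanded term set, not just the query."""
    terms = expand(query)
    return [doc_id for doc_id, text in documents.items()
            if any(term in text.lower() for term in terms)]

print(search("flash memory"))  # ['patent-001', 'patent-002']
```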

That isn’t a recipe for mining all existing patents but who wants to do that? If you gain a large enough semantic advantage in genomics, semiconductors, etc., the start-up cost to catch up will be a tough nut to crack. Particularly since you are already selling a better product for a lower price than a start-up can match.

I first saw this in a tweet by Tim O’Reilly.

PS: A better solution for software patent trolls would be a Supreme Court ruling that eliminates all software patents. Then Congress could pass a software copyright bill that grants copyright status on published code for three (3) years, non-renewable. If that sounds harsh, consider the credibility impact of nineteen-year-old bugs.

If code had to be recast every three years and all vendors were on the same footing, there would be a commercial incentive for better software. Yes? If I had the coding advantages of a major vendor, I would start lobbying for three (3) year software copyrights tomorrow. Besides, it would make software piracy a lot easier to track.

September 24, 2014

Hewlett Foundation extends CC BY policy to all grantees

Filed under: Funding,Intellectual Property (IP) — Patrick Durusau @ 3:22 pm

Hewlett Foundation extends CC BY policy to all grantees by Timothy Vollmer.

From the post:

Last week the William and Flora Hewlett Foundation announced that it is extending its open licensing policy to require that all content (such as reports, videos, white papers) resulting from project grant funds be licensed under the most recent Creative Commons Attribution (CC BY) license. From the Foundation’s blog post: “We’re making this change because we believe that this kind of broad, open, and free sharing of ideas benefits not just the Hewlett Foundation, but also our grantees, and most important, the people their work is intended to help.” The change is explained in more detail on the foundation’s website.

The foundation had a long-standing policy requiring that recipients of its Open Educational Resources grants license the outputs of those grants; this was instrumental in the creation and growth of the OER field, which continues to flourish and spread. Earlier this year, the license requirement was extended to all Education Program grants, and as restated, the policy will now be rolled out to all project-based grants under any foundation program. The policy is straightforward: it requires that content produced pursuant to a grant be made easily available to the public, on the grantee’s website or otherwise, under the CC BY 4.0 license — unless there is some good reason to use a different license.

For a long time Creative Commons has been interested in promoting open licensing policies within philanthropic grantmaking. We received a grant from the Hewlett Foundation to survey the licensing policies of private foundations, and to work toward increasing the free availability of foundation-supported works. We wrote about the progress of the project in March, and we’ve been maintaining a spreadsheet of foundation IP policies, and a model IP policy.

We urge other foundations and funding bodies to emulate the outstanding leadership demonstrated by the William and Flora Hewlett Foundation and commit to making open licensing an essential component of their grantmaking strategy.

Not only is a wave of big data approaching but it will be more available than data has been at any time in history.

As funders require open access to funded content, arguments for restricted access will simply disappear from even the humanities.

If you want to change behavior, principled arguments won’t get you as far as changing the reward system.

September 19, 2014

Libraries may digitize books without permission, EU top court rules [Nation-wide Site Licenses?]

Filed under: Intellectual Property (IP),Library — Patrick Durusau @ 10:41 am

Libraries may digitize books without permission, EU top court rules by Loek Essers.

From the post:

European libraries may digitize books and make them available at electronic reading points without first gaining consent of the copyright holder, the highest European Union court ruled Thursday.

The Court of Justice of the European Union (CJEU) ruled in a case in which the Technical University of Darmstadt digitized a book published by German publishing house Eugen Ulmer in order to make it available at its electronic reading posts, but refused to license the publisher’s electronic textbooks.

A spot of good news to remember on the next 9/11 anniversary: a Member State may authorise libraries to digitise, without the consent of the rightholders, books they hold in their collection so as to make them available at electronic reading points.

Users can’t make copies onto a USB stick, but under contemporary fictions about property rights represented in copyright statutes, that isn’t surprising.

What is surprising is that nations have not yet stumbled upon the idea of nation-wide site licenses for digital materials.

A nation acquiring a site license to the ACM Digital Library, IEEE, Springer and a dozen or so other resources/collections would have these positive impacts:

  1. Access to core computer science publications for everyone located in that nation
  2. Publishers would have one payor and could reduce/eliminate the staff that manage digital access subscriptions
  3. Universities and colleges would not require subscriptions nor the staff to manage those subscriptions (integration of those materials into collections would remain a library task)
  4. Simplify access software based on geographic IP location (fewer user/password issues)
  5. Universities and colleges could spend funds now dedicated to subscriptions for other materials
  6. Digitization of both periodical and monograph literature would be encouraged
  7. Avoids tiresome and not-likely-to-succeed arguments about balancing the public interest in IP rights discussions.

For me, #7 is the most important advantage of nation-wide licensing of digital materials. As you can tell by my reference to “contemporary fictions about property rights” I fall quite firmly on a particular side of the digital rights debate. However, I am more interested in gaining access to published materials for everyone than trying to convince others of the correctness of my position. Therefore, let’s adopt a new strategy: “Pay the man.”

As I outline above, there are obvious financial advantages to publishers from nation-wide site licenses, in the form of reduced internal costs, reduced infrastructure costs and a greater certainty in cash flow. There are advantages for the public as well as universities and colleges, so I would call that a win-win solution.

The Developing World Initiatives program from Taylor & Francis is described as:

Taylor & Francis Group is committed to the widest distribution of its journals to non-profit institutions in developing countries. Through agreements with worldwide organisations, academics and researchers in more than 110 countries can access vital scholarly material, at greatly reduced or no cost.

Why limit access to materials to "non-profit institutions in developing countries?" Granted, site-license fees for the United States would be higher than those for Liberia, but the underlying principle is the same. The less you regulate access, the simpler the delivery model and the higher the profit to the publisher. What publisher would object to that?

There are armies of clerks currently invested in maintaining one-off subscription models, but the greater public interest in access to materials, consistent with publisher IP rights, should carry the day.

If Tim O'Reilly and friends are serious about changing access models to information, let's use nation-wide site licenses to eliminate firewalls and make effective linking and transclusion a present-day reality.

Publishers get paid, readers get access. It’s really that simple. Just on a larger scale than is usually discussed.

PS: Before anyone raises the issue of cost for nation-wide site licenses, remember that the United States has spent more than $1 trillion on a "war" on terrorism that has made no progress in making the United States or its citizens more secure.

If the United States had decided to pay Springer Science+Business Media its €866m ($1,113.31m) total revenue for 2012 out of the cost of its "war" on terrorism, it could have purchased a site license to all Springer Science+Business Media content for the entire United States for 898.47 years. (Check my math: 1,000,000,000,000 / 1,113,000,000 = 898.472.)
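
For the skeptical, here is the same arithmetic as a short Python check, using the rounded figures from the paragraph above:

    # Back-of-the-envelope check of the figures above, rounded as in the text.
    war_on_terror_cost = 1_000_000_000_000   # USD, roughly $1 trillion
    springer_2012_revenue = 1_113_000_000    # USD, the converted €866m figure, rounded

    print(round(war_on_terror_cost / springer_2012_revenue, 2))   # ~898.47 years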

I first saw this in Nat Torkington’s Four short links: 15 September 2014.

August 14, 2014

Mo’ money, less scrutiny:

Filed under: Intellectual Property (IP) — Patrick Durusau @ 7:33 pm

Mo’ money, less scrutiny: Why higher-paid examiners grant worse patents by Derrick Harris.

From the post:

As people get better at their jobs, it’s logical to assume they’re able to perform their work more efficiently. However, a new study suggests that when it comes to issuing patents, there’s a point at which the higher expectations placed on promoted examiners actually become a detriment.

The study used resources from the National Center for Supercomputing Applications to analyze 1.4 million patent applications against a database of patent examiner records, measuring each examiner's grant rate as they moved up the USPTO food chain. What the researchers found, essentially, according to a University of Illinois News Bureau article highlighting the study, is:

“[A]s an examiner is given less time to review an application, they become less inclined to search for prior art, which, in turn, makes it less likely that the examiner makes a prior art-based rejection. In particular, ‘obviousness’ rejections, which are especially time-intensive, decrease.”

….

See Harris’ post for charts, details, etc.

Great to have scientific confirmation but every literate person knows the USPTO has been problematic for years. The real question, beyond the obvious need for intellectual property reform, is what to do with the USPTO?

Any solution that leaves the current leadership, staff, contractors, suppliers, etc., intact is doomed to fail. The culture of the present USPTO fostered this situation, which has festered for years. Charging the USPTO with reforming itself fits the definition of insanity attributed to Einstein:

Insanity: doing the same thing over and over again and expecting different results.

Start with a clean slate, including new indices, technology and regulations, and put an end to the mummer's farce that is the current USPTO.

August 11, 2014

Patent Fraud, As In Patent Office Fraud

Filed under: Intellectual Property (IP),Topic Maps — Patrick Durusau @ 2:09 pm

Patent Office staff engaged in fraud and rushed exams, report says by Jeff John Roberts.

From the post:

…One version of the report also flags a culture of “end-loading” in which examiners “can go from unacceptable performance to award levels in one bi-week by doing 500% to more than 1000% of their production goal.”…

See Jeff’s post for other details and resources.

Assuming the records for patent examiners can be pried loose from the Patent Office, this would make a great topic map project. Associate the 500% periods with specific patents and any subsequent litigation on those patents, to create a resource for challenging patents approved by a particular examiner.
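
Here is a minimal sketch of the association I have in mind, assuming those records can in fact be pried loose. The record layouts, examiner id, patent numbers and dates below are all invented for illustration:

    from datetime import date

    # Invented record layouts; the real production and grant data would have to be
    # obtained from the Patent Office first.
    high_output_periods = [
        # (examiner_id, bi-week start, bi-week end, percent of production goal)
        ("EX123", date(2013, 3, 4), date(2013, 3, 15), 740),
    ]

    granted_patents = [
        # (patent_number, examiner_id, grant_date) -- all invented
        ("8999001", "EX123", date(2013, 3, 12)),
        ("8999417", "EX123", date(2013, 6, 2)),
    ]

    def patents_from_high_output_periods():
        """Associate each 500%+ bi-week with the patents its examiner granted during it."""
        hits = []
        for examiner, start, end, pct in high_output_periods:
            for number, granted_by, grant_date in granted_patents:
                if granted_by == examiner and start <= grant_date <= end:
                    hits.append({"patent": number, "examiner": examiner,
                                 "period": (start, end), "percent_of_goal": pct})
        return hits

    print(patents_from_high_output_periods())

Expressed as a topic map, the examiner, the bi-week period and each patent would be first-class subjects joined by an association, so later litigation records could be merged in without re-running the analysis.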

By the time a gravy train like patent examining makes the news, you know the train has already left the station.

On the up side, perhaps Congress will re-establish the Patent Office and prohibit any prior staff, contractors, etc. from working at the new Patent Office. The new Patent Office could adopt rules designed to enable innovation while also tracking prior innovation effectively. Present Patent Office practices serve neither of those goals.

August 10, 2014

Monkeys, Copyright and Clojure

Filed under: Clojure,Intellectual Property (IP) — Patrick Durusau @ 2:29 pm

Painting in Clojure by Tom Booth is a great post that walks you through using Clojure to become a digital Jackson Pollock. I think you will enjoy the post a lot, and perhaps the output, assuming you appreciate that style of art. 😉

But I have a copyright question on which I need your advice. Tom included on the webpage a blank canvas and a button that reads: “Fill canvas.”

Here is a portion of the results of my pushing the button:

(skipped image: my digital Pollock output)

My question is: Does Tom Booth own the copyright to this image or do I?

You may have heard of the monkey taking a selfie:

(skipped image: the monkey selfie)

and the ensuing legal disputes: If a monkey takes a selfie in the forest, who owns the copyright? No one, says Wikimedia.

The Washington Post article quotes Wikimedia Foundation’s Chief Communications Officer Katherine Maher saying:

Monkeys don't own copyrights. […] What we found is that U.S. copyright law says that works that originate from a non-human source can't claim copyright.

OK, but I can own a copyright and I did push the button, but Tom wrote the non-human source that created the image. So, who wins?

Yet another example of why intellectual property law reform, freeing it from its 18th-century (and earlier) moorings, is desperately needed.

The monkey copyright case is a good deal simpler. One alleged copyright infringer (Techdirt) responded to the claim in part saying:

David Slater, almost certainly did not have a claim, seeing as he did not take the photos, and even admits that the images were an accident from monkeys who found the camera (i.e., he has stated publicly that he did not “set up” the shot and let the monkeys take it).

David Slater, unlike most content owners (as opposed to content producers), is too honest for his own good. He has admitted he made no contribution to the photograph. No contribution = no copyright.

This story is going to end sadly. Slater says he is in debt, yet is seeking legal counsel in the United States. Remember the definition of “conflict of interest” in the United States:

(skipped image: lawyer fees)

😉

OK, time to get back to work and go through Tom’s Clojure post. It really is very good.

March 9, 2014

Getty – 35 Million Free Images

Filed under: Image Understanding,Intellectual Property (IP) — Patrick Durusau @ 3:40 pm

Getty Images makes 35 million images free in fight against copyright infringement by Olivier Laurent.

From the post:

Getty Images has single-handedly redefined the entire photography market with the launch of a new embedding feature that will make more than 35 million images freely available to anyone for non-commercial usage. BJP’s Olivier Laurent finds out more.

(skipped image)

The controversial move is set to draw professional photographers’ ire at a time when the stock photography market is marred by low prices and under attack from new mobile photography players. Yet, Getty Images defends the move, arguing that it’s not strong enough to control how the Internet has developed and, with it, users’ online behaviours.

“We’re really starting to see the extent of online infringement,” says Craig Peters, senior vice president of business development, content and marketing at Getty Images. “In essence, everybody today is a publisher thanks to social media and self-publishing platforms. And it’s incredibly easy to find content online and simply right-click to utilise it.”

In the past few years, Getty Images found that its content was “incredibly used” in this manner online, says Peters. “And it’s not used with a watermark; instead it’s typically found on one of our valid licensing customers’ websites or through an image search. What we’re finding is that the vast majority of infringement in this space happen with self publishers who typically don’t know anything about copyright and licensing, and who simply don’t have any budget to support their content needs.”

To solve this problem, Getty Images has chosen an unconventional strategy. “We’re launching the ability to embed our images freely for non-commercial use online,” Peters explains. In essence, anyone will be able to visit Getty Images’ library of content, select an image and copy an embed HTML code to use that image on their own websites. Getty Images will serve the image in a embedded player – very much like YouTube currently does with its videos – which will include the full copyright information and a link back to the image’s dedicated licensing page on the Getty Images website.

More than 35 million images from Getty Images’ news, sports, entertainment and stock collections, as well as its archives, will be available for embedding from 06 March.

What a clever move by Getty!

Think about it: whom do you sue for copyright infringement? Some hobbyist blogger, or a school newspaper that used an image? OK, the RIAA would, but what about sane people?

Your first question: Did the infringement result in a substantial profit?

Your second question: Does the infringing party have enough assets to make recovering that profit worthwhile?

You only want to pursue infringement by other major for-profit players.

All of whom have to use your images publicly, so hiding the infringement isn't possible.

None of the major media outlets or publishers are going to cheat on the use of your images. Whether that is because they are honest about IP or simply too easily caught doesn't really matter.

In one fell swoop, Getty has secured for itself free advertising for every image that is used for free. Advertising it could not have bought for any sum of money.

It makes me wonder when the ACM, IEEE, Springer, Elsevier and others are going to realize that free public access to their journals and monographs will drive demand for libraries to have enhanced access to those publications.

It isn't as though EBSCO and the others are going to start using data limited to non-commercial use for their databases. That would be too obvious, not to mention a significant legal liability.

Ditto for libraries. Libraries want legitimate access to the materials they provide and/or host.

As I told an academic society once upon a time, "It's time to stop grubbing for pennies when there are $100 bills blowing overhead." The occasion was a discussion of replacing journals "lost in the mail." At a replacement cost of $3.50 (plus postage) per claim, they were employing a full-time person to research eligibility before a replacement copy could be sent. For a time I convinced them to simply replace upon request in the mailroom: track requests, but just do it. It worked quite well.

Over the years management has changed, and I suspect they have returned to protecting the "rights" of members by ensuring that only people entitled to a copy of the journal got one. I kid you not, that was the explanation for the old policy. Bizarre.

I first saw this at: Getty Set 35 Million Images Free, But Who Can Use Them? by David Godsall.

PS: The thought does occur to me that suitable annotations could be prepared ahead of time for these images so that when a for-profit publisher purchases the rights to a Getty image, someone could offer robust metadata to accompany the image.
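
A toy sketch of what "prepared ahead of time" could look like. The image identifier and every field value below are invented; the point is only that the metadata sits ready, keyed by image, before any purchaser asks for it:

    # Invented identifier and field values; the metadata sits ready, keyed by image.
    prepared_annotations = {
        "getty-0000001": {
            "caption": "Crowd at an outdoor political rally",
            "location": "Atlanta, Georgia",
            "subjects": ["politics", "public assembly"],
            "suggested_alt_text": "A speaker addresses a large outdoor crowd.",
        },
    }

    def annotations_for(image_id: str) -> dict:
        """Return the pre-built metadata for an image, or an empty record."""
        return prepared_annotations.get(image_id, {})

    print(annotations_for("getty-0000001").get("caption"))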

February 5, 2014

Patent Search and Analysis Tools

Filed under: Intellectual Property (IP),Patents,Searching — Patrick Durusau @ 2:54 pm

Free and Low Cost Patent Search and Analysis Tools: Who Needs Expensive Name Brand Products? by Jackie Hutter.

From the post:

In private conversations, some of my corporate peers inform me that they pay $1000′s per year (or even per quarter for larger companies) for access to “name brand” patent search tools that nonetheless do not contain accurate and up to date information. For example, a client tells me that one of these expensive tools fails to update USPTO records on a portfolio her company is monitoring and that the PAIR data is more than 1 year out of date. This limits the effectiveness of the expensive database by requiring her IP support staff to check each individual record on a regular basis to update the data. Of course, this limitation defeats the purpose of spending the big bucks to engage with a “name brand” search tool.

Certainly, one need not have sympathy for corporate IP professionals who manage large department budgets–if they spend needlessly on “name brand” tools and staff to manage the quality of such tools, so be it. But most companies with IP strategy needs do not have money and staff to purchase such tools, let alone to fix the errors in the datasets obtained from them. Others might wish not to waste their department budgets on worthless tools. To this end, over the last 5 years, I have used a number of free and low cost tools in my IP strategy practice. I use all of these tools on a regular basis and have personally validated the quality and validity of each one for my practice.
….

Jackie makes two cases:

First, there are free tools that perform as well as or better than commercial patent tools. A link is offered to a list of them.

Second, and more importantly from my perspective, the low-cost tools leave a lot to be desired in terms of UI and usability.

There is certainly room for an "inexpensive" but better-than-commercial-grade patent search service to establish a market.

Or perhaps a more expensive “challenge” tool that warns subscribers about patents close to theirs.
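
A minimal sketch of the similarity check such a "challenge" tool would rest on. This is plain bag-of-words cosine similarity over invented abstracts, not anything a commercial vendor actually ships; the application numbers, texts and the 0.4 alert threshold are all made up for illustration:

    import math
    import re
    from collections import Counter

    def tokens(text: str) -> Counter:
        """Lower-cased bag of words."""
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Invented abstracts: one stands in for a subscriber's patent, the others for new filings.
    subscriber_patent = "method for indexing documents using subject identifiers and merged proxies"
    new_filings = {
        "US-2014/000001": "system for merging subject proxies when indexing document collections",
        "US-2014/000002": "apparatus for brewing coffee with adjustable water temperature",
    }

    THRESHOLD = 0.4   # arbitrary alert threshold for this sketch
    base = tokens(subscriber_patent)
    for number, abstract in new_filings.items():
        score = cosine(base, tokens(abstract))
        if score >= THRESHOLD:
            print(f"ALERT {number}: similarity {score:.2f}")

A production tool would score claims rather than abstracts and use far better text models, but the alerting shape is the same: score each new filing against a subscriber's portfolio and flag anything above a threshold.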

I first saw this in a tweet by Lutz Maicher.

September 22, 2013

…Introducing … Infringing Content Online

Filed under: Intellectual Property (IP),Search Engines,Searching — Patrick Durusau @ 12:43 pm

New Study Finds Search Engines Play Critical Role in Introducing Audiences To Infringing Content Online

From the summary at Full Text Reports:

Today, MPAA Chairman Senator Chris Dodd joined Representatives Howard Coble, Adam Schiff, Marsha Blackburn and Judy Chu on Capitol Hill to release the results of a new study that found that search engines play a significant role in introducing audiences to infringing movies and TV shows online. Infringing content is a TV show or movie that has been stolen and illegally distributed online without any compensation to the show or film’s owner.

The study found that search is a major gateway to the initial discovery of infringing content online, even in cases when the consumer was not looking for infringing content. 74% of consumers surveyed cited using a search engine as a navigational tool the first time they arrived at a site with infringing content. And the majority of searches (58%) that led to infringing content contained only general keywords — such as the titles of recent films or TV shows, or phrases related to watching films or TV online — and not specific keywords aimed at finding illegitimate content.

I rag on search engines fairly often about the quality of their results, so in light of this report I want to give them a shout-out: Well done!

They may not be good at the sophisticated content discovery that I find useful, but on the other hand, when sweat hogs are looking for entertainment, search results can fill the bill.

Still, knowing that infringing content can be found may be good for PR purposes but not much more. Search results don't capture (read: identify) enough subjects to enable the mining of patterns of infringement and other data analysis relevant to opposing infringement.

Infringing content is easy to find, so the business case for topic maps lies with content providers, who need more detail (read: subjects and associations) than a search engine can provide.
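
A toy sketch of the extra detail I mean by "subjects and associations." Every value below is invented; the point is that each infringing copy is tied to the work, the hosting site and how it was found, which is what pattern mining and takedown work actually need:

    # Invented values throughout: each infringing copy is tied to the work, the host
    # and how it was discovered -- the associations a content owner actually needs.
    infringing_copies = [
        {"url": "http://streaming.example/watch?id=123",
         "work": "work:example-film",
         "host": "streaming.example",
         "discovered_via": "search engine, general keywords",
         "first_seen": "2013-09-01"},
        {"url": "http://locker.example/dl/987",
         "work": "work:example-film",
         "host": "locker.example",
         "discovered_via": "search engine, title keywords",
         "first_seen": "2013-09-03"},
    ]

    def copies_by_host(copies):
        """Group infringing copies by hosting site, the unit a takedown effort targets."""
        grouped = {}
        for copy in copies:
            grouped.setdefault(copy["host"], []).append(copy["url"])
        return grouped

    print(copies_by_host(infringing_copies))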

New Study Finds Search Engines Play Critical Role in Introducing Audiences To Infringing Content Online (PDF of the news release)


Update: Understanding the Role of Search in Online Piracy. The full report. Additional detail but no links to the data.

February 14, 2013

Intellectual Property Rights: Fiscal Year 2012 Seizure Statistics

Filed under: Government,Intellectual Property (IP),Transparency — Patrick Durusau @ 7:51 pm

Intellectual Property Rights: Fiscal Year 2012 Seizure Statistics

Fulltextreports.com quotes this report as saying:

In Fiscal Year (FY) 2012, DHS and its agencies, CBP and ICE, remained vigilant in their commitment to protect American consumers from intellectual property theft as well as enforce the rights of intellectual property rights holders by expanding their efforts to seize infringing goods, leading to 691 arrests, 423 indictments and 334 prosecutions. Counterfeit and pirated goods pose a serious threat to America’s economic vitality, the health and safety of American consumers, and our critical infrastructure and national security. Through coordinated efforts to interdict infringing merchandise, including joint operations, DHS enforced intellectual property rights while facilitating the secure flow of legitimate trade and travel.

I just feel so…. underwhelmed.

When was the last time you felt frightened by a fake French handbag? Or imitation Italian shoes?

I mean, they may be ugly but so were the originals.

I mention this because tracking data across the various intellectual property enforcement agencies isn't straightforward.

I found that out while looking into some historical data on copyright enforcement after the Aaron Swartz tragedy.

The question I want to pursue with topic maps is: Who benefits from these government enforcement efforts?

As far as I can tell now, today, I never have. I bet the same is true for you.

More on gathering the information to make that case anon.
