Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

December 7, 2016

Facebook Patents Tool To Think For You

Filed under: Censorship,Facebook,Free Speech,News — Patrick Durusau @ 4:40 pm

My apologies, but Facebook thinks you are too stupid to detect “fake news.” Facebook will compensate for your stupidity with a process submitted for a US patent. For free!

Facebook is patenting a tool that could help automate removal of fake news by Casey Newton.

From the post:

As Facebook works on new tools to stop the spread of misinformation on its network, it’s seeking to patent technology that could be used for that purpose. This month the US Trademark and Patent Office published Facebook’s application for Patent 0350675: “systems and methods to identify objectionable content.” The application, which was filed in June 2015, describes a sophisticated system for identifying inappropriate text and images and removing them from the network.

As described in the application, the primary purpose of the tool is to improve the detection of pornography, hate speech, and bullying. But last month, Zuckerberg highlighted the need for “better technical systems to detect what people will flag as false before they do it themselves.” The patent published Thursday, which is still pending approval, offers some ideas for how such a system could work.

A Facebook spokeswoman said the company often seeks patents for technology that it never implements, and said this patent should not be taken as an indication of the company’s future plans. The spokeswoman declined to comment on whether it was now in use.

The system described in the application is largely consistent with Facebook’s own descriptions of how it currently handles objectionable content. But it also adds a layer of machine learning to make reporting bad posts more efficient, and to help the system learn common markers of objectionable content over time — tools that sound similar to the anticipatory flagging that Zuckerberg says is needed to combat fake news.
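The "common markers" approach described in the excerpt can be sketched very loosely as learning which words appear more often in flagged posts than in unflagged ones, then scoring new posts against those counts. This is purely an illustration of the general idea; nothing here reflects the actual system in Facebook's patent application.

```python
from collections import Counter

def train(flagged, unflagged):
    """Count word frequencies in flagged vs. unflagged posts.

    A stand-in for 'learning common markers of objectionable
    content over time' -- not Facebook's actual method.
    """
    def word_counts(posts):
        counts = Counter()
        for post in posts:
            counts.update(post.lower().split())
        return counts
    return word_counts(flagged), word_counts(unflagged)

def score(post, flagged_counts, unflagged_counts):
    """Crude 'flaggability' score: how much more often the post's
    words appear in flagged content than in unflagged content."""
    total = 0.0
    words = post.lower().split()
    for w in words:
        f = flagged_counts.get(w, 0) + 1   # add-one smoothing
        u = unflagged_counts.get(w, 0) + 1
        total += (f - u) / (f + u)
    return total / max(len(words), 1)

# Toy training data (entirely invented for this sketch)
flagged = ["this is spam spam spam", "awful spam content"]
clean = ["a nice post about cats", "topic maps are interesting"]
fc, uc = train(flagged, clean)

# A post sharing vocabulary with flagged content scores higher
print(score("more spam content", fc, uc))      # positive
print(score("cats are interesting", fc, uc))   # negative
```

A real system would use trained classifiers over text and images rather than raw word counts, but the shape is the same: accumulate signals from past reports, then flag new content that matches those signals before anyone reports it.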

If you substitute “user” for “administrator” where it appears in the text, Facebook would be enabling users to police the content they view.

Why Facebook finds it objectionable for users to make decisions about the content they view isn’t clear. Suggestions on that question?

The process doesn’t appear to be either accountable or transparent.

If I can’t see the content that is removed by Facebook, how do I make judgments about why it was removed and/or how that compares to content about to be uploaded to Facebook?

Urge Facebook users to demand the power to make decisions about the content they view.

Urge Facebook shareholders to pressure management to abandon this quixotic quest to be an internet censor.
