Archive for June, 2016

Ferengi Rules of Acquisition

Thursday, June 30th, 2016

Ferengi Rules of Acquisition

From the webpage:

The Ferengi Rules of Acquisition are a collection of two hundred and eighty-five sayings that form the basis of Ferengi philosophy.

The jury verdict against Oracle and in favor of HP for $3 billion reminded me of rule #8:

“Small print leads to large risk.”

You would think Ellison could recite them from memory by now. 😉

Secret FBI National Security Letter (NSL) Attacks on Reporters – Safe Leaking?

Thursday, June 30th, 2016

Secret Rules Make It Pretty Easy For The FBI To Spy On Journalists by Cora Currier.

For those of us afflicted with reflexive American exceptionalism (the belief that press censorship only happens “over there”), Cora’s story is a sobering read.

From the post:

Secret FBI rules allow agents to obtain journalists’ phone records with approval from two internal officials — far less oversight than under normal judicial procedures.

The classified rules, obtained by The Intercept and dating from 2013, govern the FBI’s use of National Security Letters, which allow the bureau to obtain information about journalists’ calls without going to a judge or informing the news organization being targeted. They have previously been released only in heavily redacted form.

Media advocates said the documents show that the FBI imposes few constraints on itself when it bypasses the requirement to go to court and obtain subpoenas or search warrants before accessing journalists’ information.

Cora goes on to point out that the FBI issued nearly 13,000 NSLs in 2015.

After great coverage on the FBI and its use of NSLs, Cora concludes:


For Brown, of the Reporters Committee, the disclosure of the rules “only confirms that we need information about the actual frequency and context of NSL practice relating to newsgathering and journalists’ records to assess the effectiveness of the new guidelines.”

That’s the root of the problem, isn’t it?

A lack of information on how NSLs are, in fact, being used against journalists.

Care to comment on the odds of getting an accurate accounting of the FBI’s war on journalists from the FBI?

No? I thought not.

So how can that data be gathered?

Question for discussion (NOT legal advice)

In 2005, the non-disclosure requirements for NSLs were modified to read:

18 U.S. Code § 2709 – Counterintelligence access to telephone toll and transactional records

(2) Exception.—

(A)In general.—A wire or electronic communication service provider that receives a request under subsection (b), or officer, employee, or agent thereof, may disclose information otherwise subject to any applicable nondisclosure requirement to—

(i) those persons to whom disclosure is necessary in order to comply with the request;

(ii) an attorney in order to obtain legal advice or assistance regarding the request; or

(iii) other persons as permitted by the Director of the Federal Bureau of Investigation or the designee of the Director.

Each person in the chain of disclosure has to be advised of the requirement to keep the NSL secret.

Unless the law has changed more radically than I imagine, the burden of proving a criminal offense still rests with the government.

If I am served with an NSL and I employ one or more attorneys, who have assistants working on my case, and the NSL is leaked to a public site, it remains the government’s burden to prove who leaked the NSL.

The government cannot force the innocent in the chain of disclosure to exculpate themselves, leaving only the guilty party to face justice. The innocent can remain mute, as is the privilege of every criminal defendant.

Is that a fair statement?

If so, how many brave defendants are necessary in the chain of disclosure per NSL?

As Jan says in “Tweeter and the Monkey Man”:

“It was you to me who taught
In Jersey anything’s legal, as long as you don’t get caught”

If that sounds anarchistic, remember the government chose to abandon the Constitution, first. If it wants respect for law, it should respect the Constitution.

TUGboat – The Complete Set

Thursday, June 30th, 2016

Norm Walsh tweeted an offer of circa-1990 issues of TUGboat, free to a good home, today (30 June 2016).

On the off chance that you, like me, have only a partial set, consider the full set: TUGboat Contents, 1980 1:1 to date.

From the TUGBoat homepage:

The TUGboat journal is a unique benefit of joining TUG. It is currently published three times a year and distributed to all TUG members (for that year). Anyone can also buy copies from the TUG store.

We post articles online after about one year for the benefit of the entire TeX community, but TUGboat is funded by member support. So please consider joining TUG if you find TUGboat useful.

TUGboat publishes the proceedings of the TUG Annual Meetings, and sometimes other conferences. A list of other publications by TUG, and by other user groups is available.

This is an opportunity to support the TeX Users Group (TUG) without looking for a future home for your printed copies of TUGboat. Donate to TUG and read online!

Enjoy!

GPU + Russian Algorithm Bests Supercomputer

Thursday, June 30th, 2016

No need for supercomputers

From the post:


Senior researchers Vladimir Pomerantcev and Olga Rubtsova, working under the guidance of Professor Vladimir Kukulin (SINP MSU), were able to use on an ordinary desktop PC with GPU to solve complicated integral equations of quantum mechanics — previously solved only with the powerful, expensive supercomputers. According to Vladimir Kukulin, the personal computer does the job much faster: in 15 minutes it is doing the work requiring normally 2-3 days of the supercomputer time.

The main problem in solving the scattering equations of multiple quantum particles was the calculation of the integral kernel — a huge two-dimensional table, consisting of tens or hundreds of thousands of rows and columns, with each element of such a huge matrix being the result of extremely complex calculations. But this table appeared to look like a monitor screen with tens of billions of pixels, and with a good GPU it was quite possible to calculate all of these. Using the software developed in Nvidia and having written their own programs, the researchers split their calculations on the many thousands of streams and were able to solve the problem brilliantly.

“We reached the speed we couldn’t even dream of,” Vladimir Kukulin said. “The program computes 260 million of complex double integrals on a desktop computer within three seconds only. No comparison with supercomputers! My colleague from the University of Bochum in Germany (recently deceased, mournfully), whose lab did the same, carried out the calculations by one of the largest supercomputers in Germany with the famous blue gene architecture that is actually very expensive. And what his group is seeking for two or three days, we do in 15 minutes without spending a dime.”

The most amazing thing is that the desired quality of graphics processors and a huge amount of software to them exist for ten years already, but no one used them for such calculations, preferring supercomputers. Anyway, our physicists surprised their Western counterparts pretty much.
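The structure described in the quote, a huge table whose entries are independent and individually expensive calculations, is “embarrassingly parallel,” which is exactly what GPUs are built for. As a loose, hypothetical illustration (not the authors’ code, and CPU-vectorized NumPy standing in for GPU threads), here is a sketch of filling such a kernel table and checking it against a closed form:

```python
import numpy as np

# Toy stand-in for an integral kernel: each matrix element K[i, j] is a
# two-dimensional integral whose integrand depends on parameters (p_i, q_j).
# The real physics (few-body scattering equations) is far more complex; the
# point is only that every element is independent, so all of them can be
# evaluated in parallel -- on a GPU, one thread per element.

p = np.linspace(0.1, 2.0, 200)   # row parameters
q = np.linspace(0.1, 2.0, 200)   # column parameters
x = np.linspace(0.0, 1.0, 64)    # inner integration grid
y = np.linspace(0.0, 1.0, 64)

def trapz(f, grid):
    """Composite trapezoid rule along the last axis."""
    return ((f[..., :-1] + f[..., 1:]) / 2 * np.diff(grid)).sum(axis=-1)

# Integrand f(x, y; p, q) = exp(-p*x - q*y), chosen because its double
# integral has a closed form to check against: (1-e^-p)(1-e^-q)/(p*q).
ix = trapz(np.exp(-np.outer(p, x)), x)   # 1-D integrals over x, per p
iy = trapz(np.exp(-np.outer(q, y)), y)   # 1-D integrals over y, per q
K = np.outer(ix, iy)                     # separable integrand => outer product

exact = np.outer((1 - np.exp(-p)) / p, (1 - np.exp(-q)) / q)
print(np.max(np.abs(K - exact)))         # small discretization error
```

On a GPU the same computation would assign each of the 200 × 200 elements (or each inner quadrature point) to its own thread, which is the decomposition the researchers describe.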

One of the principal beneficiaries of the US restricting the export of the latest generation of computer technology to the former USSR was, of course, Russia.

Deprived of the latest hardware, Russian mathematicians and computer scientists were forced to be more efficient with equipment one or two generations behind the state of the art.

Parity between the USSR and the USA in nuclear weapons is testimony to their success and the failure of US export restriction policies.

For the technical details: V.N. Pomerantsev, V.I. Kukulin, O.A. Rubtsova, S.K. Sakhiev. Fast GPU-based calculations in few-body quantum scattering. Computer Physics Communications, 2016; 204: 121 DOI: 10.1016/j.cpc.2016.03.018.

Will a GPU help you startle your colleagues in the near future?

Curse of Dimensionality Explained

Thursday, June 30th, 2016

Curse of Dimensionality Explained by Nikolay Manchev.

Nikolay uses the following illustration:

[Image: curse-dimensions-460 – regions multiplying as the number of dimensions increases]

And follows with (in part):


The curse of dimensionality – as the number of dimensions increases, the number of regions grows exponentially.

This means we have to use 8,000 observations in three-dimensional space to get the same density as we would get from 20 observations in a one-dimensional space.

This illustrates one of the key effects of the curse of dimensionality – as dimensionality increases the data becomes sparse. We need to gather more observations in order to present the classification algorithm with a good space coverage. If we, however, keep increasing the number of dimensions, the number of required observations quickly goes beyond what we can hope to gather.
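The arithmetic behind the quoted numbers is worth making concrete: if n observations give adequate coverage in one dimension, matching that per-region density in d dimensions requires n**d observations, because the number of equal-sized regions grows exponentially with d. A minimal sketch:

```python
# If n observations give adequate coverage in one dimension, matching that
# per-region density in d dimensions requires n**d observations: splitting
# each axis into n bins produces n**d regions.

def required_observations(n_1d, dims):
    """Observations needed in `dims` dimensions to match the density
    of n_1d observations in one dimension."""
    return n_1d ** dims

for d in range(1, 6):
    print(d, required_observations(20, d))
# 20 observations in 1-D become 8,000 in 3-D and 3,200,000 in 5-D.
```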

See Nikolay’s post for more details but I thought the illustration of sparsity induced by dimensions was worth repeating.

World-Check Database Leak Teaser

Thursday, June 30th, 2016

Chris Vickery posted to Reddit: Terrorism Blacklist: I have a copy. Should it be shared?, which reads in part as follows:

…A few years ago, Thomson Reuters purchased a company for $530 million. Part of this deal included a global database of “heightened-risk individuals” called World-Check that Thomson Reuters maintains to this day. According to Vice.com, World-Check is used by over 300 government and intelligence agencies, 49 of the 50 biggest banks, and 9 of the top 10 global law firms. The current-day version of the database contains, among other categories, a blacklist of 93,000 individuals suspected of having ties to terrorism.

I have obtained a copy of the World-Check database from mid-2014.

No hacking was involved in my acquisition of this data. I would call it more of a leak than anything, although not directly from Thomson Reuters. The exact details behind that can be shared at a later time.

This copy has over 2.2 million heightened-risk individuals and organizations in it. The terrorism category is only a small part of the database. Other categories consist of individuals suspected of being related to money laundering, organized crime, bribery, corruption, and other unsavory activities.

I am posting this message in order to ask, “Should I release this database to the world?”. I want your opinion.

Yeah, right.

Chris’s question: “Should I release this database to the world?,” was moot from the outset.

This is pandering for attention at its very worst.

Chris could have put all of us on par with $1 million subscribers to the World-Check database but chose attention for himself instead.

There are only three sources of data:

  • Clients – Confidential until the client says release it, even in the face of government pressure (just good professional ethics).
  • Contract – Limited by the terms you agreed to for access. If you don’t want to agree to the terms, find another means of access. (This falls under the “don’t lie” principle; governments do enough of that for all of us.)
  • Other – Should be shared as widely and often as possible.

The World-Check database clearly falls under “other” and should have been shared as widely as possible.

Thomson Reuters and similar entities survive not because of merit or performance, but because people like Chris compensate for their organizational and technical failures. The public interest is not served by preserving a less than stellar status quo.

Not to mention that leaking the list would create marketing opportunities. The criminal defense bar comes to mind.

Don’t tease, leak!

Index on Censorship – 250th Issue – Subscribe!

Thursday, June 30th, 2016

Journalists under fire and under pressure: summer magazine 2016 by Vicky Baker.

From the post:

Index on Censorship has dedicated its milestone 250th issue to exploring the increasing threats to reporters worldwide. Its special report, Truth in Danger, Danger in Truth: Journalists Under Fire and Under Pressure, is out soon.

Highlights include Lindsey Hilsum, writing about her friend and colleague, the murdered war reporter Marie Colvin, and asking whether journalists should still be covering war zones. Stephen Grey looks at the difficulties of protecting sources in an era of mass surveillance. Valeria Costa-Kostritsky shows how Europe’s journalists are being silenced by accusations that their work threatens national security. Kaya Genç interviews Turkey’s threatened investigative journalists, and Steven Borowiec lifts the lid on the cosy relationships inside Japan’s press clubs. Plus, the inside track on what it is really like to be a local reporter in Syria and Eritrea.

Also in this issue: the late Swedish crime writer Henning Mankell explores colonialism in Africa in an exclusive play extract; Jemimah Steinfeld interviews China’s most famous political cartoonist; Irene Caselli writes about the controversies and censorship of Latin America’s soap operas; and Norwegian musician Moddi tells how hate mail sparked an album of music that had been silenced.

The 250th cover is by Ben Jennings. Plus there are cartoons and illustrations by Martin Rowson, Brian John Spencer, Sam Darlow and Chinese cartoonist Rebel Pepper.

You can order your copy here, or take out a digital subscription via Exact Editions. Copies are also available at the BFI, the Serpentine Gallery, MagCulture, (London), News from Nowhere (Liverpool), Home (Manchester) and on Amazon. Each magazine sale helps Index on Censorship continue its fight for free expression worldwide.

Index on Censorship magazine was started in 1972 and remains the only global magazine dedicated to free expression. It has produced 250 issues, with contributors including Samuel Beckett, Gabriel García Márquez, Nadine Gordimer, Arthur Miller, Salman Rushdie, Margaret Atwood, and many more.

Sadly, there is no lack of volunteers for the role of censor.

There are the four horsemen of internet censorship, Facebook, Twitter, YouTube and Microsoft, attempting to curry favor with the EU by censoring content.

Other volunteers include Jonathan Weisman (The Times deputy Washington editor), Andrew Golis (founder and CEO of This.cm), and of course, Hillary Clinton, a long time censorship advocate. To mention only a few of them.

Despite the governments and other forces supporting censorship, and the never-ending nature of the war against it, mine is not the counsel of despair.

The war against censorship cannot be waged by front line fighters alone.

The Other End of the Spear: The Tooth-to-Tail Ratio (T3R) in Modern Military Operations by John J. McGrath (2012), summarized the ratio of combat to other troops in Iraq with this graphic:

[Image: iraq-military-support-460 – ratio of combat to support troops in Iraq]

Professional armies recognize the value of non-combat roles.

Do you?

Subscribe to Index on Censorship today!

PS: While we are talking about war, remember that professional military organizations study, practice and write about war. Stripped of the occasional ideological fluff, their publications can help you avoid any number of amateurish mistakes.

Computerworld’s advanced beginner’s guide to R

Wednesday, June 29th, 2016

Computerworld’s advanced beginner’s guide to R by David Smith.

From the post:

Many newcomers to R got their start learning the language with Computerworld’s Beginner’s Guide to R, a 6-part introduction to the basics of the language. Now, budding R users who want to take their skills to the next level have a new guide to help them: Computerworld’s Advanced Beginner’s Guide to R. Written by Sharon Machlis, author of the prior Beginner’s guide and regular reporter of R news at Computerworld, this new 72-page guide dives into some trickier topics related to R: extracting data via API, data wrangling, and data visualization.

Well, what are you waiting for?

Either read it or pass it along!

Enjoy!

How Secure Are Emoji Ciphers?

Wednesday, June 29th, 2016

You Can Now Turn Messages Into Secret Code Using Emoji by Joon Ian Wong.

From the post:

Emoji are developing into their own language, albeit a sometimes impenetrable one. But they are about to become truly impenetrable. A new app from the Mozilla Foundation lets you use them for encryption.

The free web app, called Codemoji, lets users write a message in plain-text, then select an emoji “key” to mask the letters in that message with a series of emoji. To decrypt a message, the correct key must be entered in the app, turning emoji back into the alphabet.

Caesar ciphers (think letter substitution) are said to be “easy” to solve with modern computers.

Which is true, but the security of an Emoji cipher depends on how long the information must remain secret.
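To see why such ciphers are weak against analysis yet adequate for short-lived secrets, here is a toy emoji substitution cipher. This is a sketch of the idea only; the emoji alphabet is an arbitrary choice and this is not Codemoji’s actual key scheme:

```python
# A toy emoji substitution cipher: each letter maps to a distinct emoji.
# Like any monoalphabetic substitution it falls quickly to frequency
# analysis -- but "quickly" may still be longer than the secret matters.
import string

LETTERS = string.ascii_lowercase
# 26 distinct emoji as the ciphertext alphabet (arbitrary choice).
EMOJI = list("😀😁😂😃😄😅😆😇😈😉😊😋😌😍😎😏😐😑😒😓😔😕😖😗😘😙")

ENC = dict(zip(LETTERS, EMOJI))
DEC = dict(zip(EMOJI, LETTERS))

def encrypt(plaintext):
    """Substitute each letter with its emoji; leave digits/spaces as-is."""
    return "".join(ENC.get(c, c) for c in plaintext.lower())

def decrypt(ciphertext):
    """Reverse the substitution using the same key."""
    return "".join(DEC.get(c, c) for c in ciphertext)

msg = "detonate at 1215"
assert decrypt(encrypt(msg)) == msg
```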

For example, you discover a smart phone at 11:00 AM (your local) and it has the following message:

Detonate at 12:15 P.M. (your local)

but that message is written in Emoji using the angry face as the key:

[Image: emoji-code – the message encoded in emoji]

That Emoji coded message is as secure as a message encoded with the best the NSA can provide.

Why?

If you could read the message, you would know detonation (assuming it is today) is only 75 minutes away. Explosions are public events, and knowing in hindsight that you had captured the timing message but broke the code too late isn’t all that useful.

The “value” of that message being kept secret expires at the same time as the explosion.

In addition to learning more about encryption, use Codemoji as a tool for thinking about your encryption requirements.

Some (conflicting) requirements: Ease of use, resistance to attack (how to keep the secret), volume of use, hardware/software requirements, etc.

Everyone would like a system that is brain-dead easy to use, impervious even to alien-origin quantum computers, scales linearly, and runs on an Apple Watch.

Not even the NSA is rumored to have such a system. Become informed so you can make informed compromises.

Buffoons A Threat To Cartoonists?

Wednesday, June 29th, 2016

How social media has changed the landscape for editorial cartooning by Ann Telnaes.

At the center of the social media outrage that Ann describes was her cartoon:

[Image: ted-cruz-cartoon-460 – Ann Telnaes’ cartoon of Ted Cruz]

I did not see the original Washington Post political attack ad featuring Cruz and his daughters, but the use of family as props is traditional American politics. I took Ann’s cartoon as criticism of that practice in general and Cruz’s use of it in particular.

Even more of a tradition in American politics, is the intellectually and morally dishonest failure to engage the issue at hand. Rather than responding to the criticism of his exploitation of his own children, Cruz attacked Ann as though she was the one at fault.

That should not have been unexpected, given Cruz’s party is responsible for the “Checkers” speech and other notable acts of national deception. (If you don’t know the “Checkers” speech, check it out. TV was just becoming a player in national politics, much like social media now.)

As you can tell, I think the response by Cruz and others was a deliberate distortion of the original cartoon, and the abuse heaped upon Ann was certainly unjustified. What I am missing is the threat posed by “social media lynch mobs.”

What if every buffoon on Fox, social media, etc., all took to social media to criticize Ann’s cartoon?

Certainly a waste of electricity and data packets, but so what? They are theirs to waste.

Ann’s fellow cartoonists recognized the absurdity of the criticism, as would any rational person familiar with American politics.

Ann suggests:


How should the journalism community protect cartoonists so they can do their jobs? We need to educate and be ready the next time a cartoonist aims his or her satire against a thin-skinned politician or interest group looking for an opportunity to manipulate fair criticism. Be aware when a false narrative is being presented to deflect the actual intent of a cartoon; talk to your editors and come up with a plan to counter the misinformation.

Sorry, what other than “false narratives” were you expecting? Shouldn’t we make that assumption at the outset and prepare to press forward with the “true narrative?”

Ann almost captures my approach when she says:

It has been said cartoonists are on the front lines of the war to defend free speech.

The war to defend free speech is quite real. If you doubt that, browse the pages of Index on Censorship.

Where I differ from Ann is that I don’t see the braying of every buffoon social media has to offer as a threat to free speech.

Better filters are the answer to buffoons on social media.

Slouching Towards Total Surveillance – Investigatory Powers Bill Update

Wednesday, June 29th, 2016

Investigatory Powers Bill 2015-16 to 2016-17.

Bill Summary:

A Bill to make provision about the interception of communications, equipment interference and the acquisition and retention of communications data, bulk personal datasets and other information; to make provision about the treatment of material held as a result of such interception, equipment interference or acquisition or retention; to establish the Investigatory Powers Commissioner and other Judicial Commissioners and make provision about them and other oversight arrangements; to make further provision about investigatory powers and national security; to amend sections 3 and 5 of the Intelligence Services Act 1994; and for connected purposes.

Whatever criticisms you may have of the UK Parliament, you must admit its delivery of legislative information is quite nice.

Via email today I received notice of “sitting” and “provisional sitting” on the Investigatory Powers Bill. A quick check of their glossary reveals that “sitting” is another term for committee meeting.

The first “sitting” or committee meeting on this bill will be 11.07.2016.

A process described on the homepage of this bill as:

Committee stage – line by line examination of the Bill – is scheduled to begin on 11 July.

Considering the bill’s progress so far, I’m not expecting “line by line examination” to slow it down.

Still, it’s not yet a law, so delay, diversion, and dilution remain possibilities.

The privacy you protect could well be your own.

The Feynman Technique – Contest for Balisage 2016?

Tuesday, June 28th, 2016

The Best Way to Learn Anything: The Feynman Technique by Shane Parrish.

From the post:

There are four simple steps to the Feynman Technique, which I’ll explain below:

  1. Choose a Concept
  2. Teach it to a Toddler
  3. Identify Gaps and Go Back to The Source Material
  4. Review and Simplify

This made me think of the late-breaking Balisage 2016 papers posted by Tommie Usdin in email:

  • Saxon-JS – XSLT 3.0 in the Browser, by Debbie Lockett and Michael Kay, Saxonica
  • A MicroXPath for MicroXML (AKA A New, Simpler Way of Looking at XML Data Content), by Uche Ogbuji, Zepheira
  • A catalog of Functional programming idioms in XQuery 3.1, James Fuller, MarkLogic

New contest for Balisage?

Pick a concept from a Balisage 2016 presentation and you have sixty (60) seconds to explain it to Balisage attendees.

What do you think?

Remember, you can’t play if you don’t attend! Register today!

If Tommie agrees, the winner gets me to record a voice mail greeting for their phone! 😉

Integrated R labs for high school students

Tuesday, June 28th, 2016

Integrated R labs for high school students by Amelia McNamara.

From the webpage:

Amelia McNamara, James Molyneux, Terri Johnson

This looks like a very promising approach for capturing the interests of high school students in statistics and R.

From the larger project, Mobilize, curriculum page:

Mobilize centers its curricula around participatory sensing campaigns in which students use their mobile devices to collect and share data about their communities and their lives, and to analyze these data to gain a greater understanding about their world. Mobilize breaks barriers by teaching students to apply concepts and practices from computer science and statistics in order to learn science and mathematics. Mobilize is dynamic: each class collects its own data, and each class has the opportunity to make unique discoveries. We use mobile devices not as gimmicks to capture students’ attention, but as legitimate tools that bring scientific enquiry into our everyday lives.

Mobilize comprises four key curricula: Introduction to Data Science (IDS), Algebra I, Biology, and Mobilize Prime, all focused on preparing students to live in a data-driven world. The Mobilize curricula are a unique blend of computational and statistical thinking subject matter content that teaches students to think critically about and with data. The Mobilize curricula utilize innovative mobile technology to enhance math and science classroom learning. Mobilize brings “Big Data” into the classroom in the form of participatory sensing, a hands-on method in which students use mobile devices to collect data about their lives and community, then use Mobilize Visualization tools to analyze and interpret the data.

I like the approach of having the student collect their own and process their own data. If they learn to question their own data and processes, hopefully they will ask questions about data processing results presented as “facts.” (Since 2016 is a presidential election year in the United States, questioning claimed data results is especially important.)

Enjoy!

D3 4.0.0

Tuesday, June 28th, 2016

Mike Bostock tweets:

After 12+ months and ~4,878 commits, I am excited to announce the release of D3 4.0! https://github.com/d3/d3/releases/v4.0.0 … #d3js

After looking at the highlights page on Github, I couldn’t in good conscience omit any of it:

D3 is now modular, composed of many small libraries that you can also use independently. Each library has its own repo and release cycle for faster development. The modular approach also improves the process for custom bundles and plugins.

There are a lot of improvements in 4.0: there were about as many commits in 4.0 as in all prior versions of D3. Some changes make D3 easier to learn and use, such as immutable selections. But there are lots of new features, too! These are covered in detail in the release notes; here are a few highlights.

Colors, Interpolators and Scales

Shapes and Layouts

Selections, Transitions, Easings and Timers

Even More!

Don’t complain to me that you are bored over the Fourth of July weekend in the United States.

Downloads: d3.zip, Source code (zip), Source code (tar.gz).

How To Get On The FBI Terrorist Watch List

Tuesday, June 28th, 2016

Thomas Neuberger published a list of activities that, cumulatively, may get you on the FBI terrorist watch list: We Are All Terror Suspects Under the FBI’s Communities Against Terrorism Program.

Unfortunately, given the secrecy surrounding the FBI terrorist watch list, it isn’t possible to know which activities, or to what degree, are necessary to ensure your inclusion on the list.

The same is true for the no fly list, except there you will be prevented from flying, which is a definite “tell” that you are on the no fly list.

Thomas outlines the dangers of the FBI terrorist watch list, but not how we can go about defeating those dangers.

One obvious solution is to get everyone on the FBI terrorist watch list. If we are all equally suspect, the FBI will spend all its time trying to separate mere “suspects” from “real suspects” from “really terrorist suspects.”

To that end, think about the following:

  • Report sightings of FBI agents with unknown persons.
  • Report sightings of FBI agents with known persons.
  • Report people entering federal buildings.
  • Report people exiting federal buildings.
  • Report people entering/exiting state/local government offices.
  • Report movements of gasoline, butane, etc., trucks.
  • Report people entering/exiting airports.
  • Report people entering/leaving bars.
  • Report people buying gasoline or butane.
  • Report people buying toys.
  • Report people entering/exiting gun shops/shows.
  • etc.

The FBI increases its ignorance every day by collecting more data than it can usefully process.

Help yourself and your fellow citizens to hide in a sea of data and ignorance.

Report your sightings to the FBI today!

PS: If that sounds ineffectual, remember that the FBI was warned about Omar Mateen, twice. When, not if, a future terrorist attack happens and your accidental report of the terrorist surfaces, how will that make the FBI look?

The FBI has created a data collection madhouse for itself. Help them enjoy it.

Functor Fact @FunctorFact [+ Tip for Selling Topic Maps]

Tuesday, June 28th, 2016

JohnDCook has started @FunctorFact, tweets “..about category theory and functional programming.”

John has a page listing his Twitter accounts. It needs to be updated to reflect the addition of @FunctorFact.

BTW, just by accident I’m sure, John’s blog post for today is titled: Category theory and Koine Greek. It has the following lesson for topic map practitioners and theorists:


Another lesson from that workshop, the one I want to focus on here, is that you don’t always need to convey how you arrived at an idea. Specifically, the leader of the workshop said that if you discover something interesting from reading the New Testament in Greek, you can usually present your point persuasively using the text in your audience’s language without appealing to Greek. This isn’t always possible—you may need to explore the meaning of a Greek word or two—but you can use Greek for your personal study without necessarily sharing it publicly. The point isn’t to hide anything, only to consider your audience. In a room full of Greek scholars, bring out the Greek.

This story came up in a recent conversation about category theory. You might discover something via category theory but then share it without discussing category theory. If your audience is well versed in category theory, then go ahead and bring out your categories. But otherwise your audience might be bored or intimidated, as many people would be listening to an argument based on the finer points of Koine Greek grammar. Microsoft’s LINQ software, for example, was inspired by category theory principles, but you’d be hard pressed to find any reference to this because most programmers don’t want to know or need to know where it came from. They just want to know how to use it.

Sure, it is possible to recursively map subject identities in order to arrive at a useful and maintainable mapping between subject domains, but the people with the checkbook are only interested in a viable result.

How you got there could involve enslaved pixies for all they care. They do care about negative publicity, though, so keep your use of pixies to yourself.

Looking forward to tweets from @FunctorFact!

Digital Rights – Privacy – Video Conference – Wednesday, June 29, 2016

Sunday, June 26th, 2016

Video conference for campus and community organizers (June 2016)

From the webpage:

[Image: student-organizing-460]

Are you part of a campus or community organization concerned about digital rights?

If not, do you want to raise a voice in your community for privacy and access to the intellectual commons?

We'd like to help! EFF will host a video conference to highlight opportunities for grassroots organizers on Wednesday, June 29, 2016 at 3pm PST / 6pm EST.

We'll hear from speakers describing campaigns and events available for your group's support, as well as best practices that you might consider emulating with your friends and neighbors. We're also eager to hear from you about any digital rights campaigns on which you're working in your community, and to expose others in this growing grassroots network to social media opportunities to support your activism and organizing.

Please register to receive the link through which to participate using an open, encrypted, video chat platform.

No word on removing the tape from your video camera for this event. 😉

Spread the word about this video conference!

Another Betrayal By Cellphone – Personal Identity

Sunday, June 26th, 2016

Normal operation of the cell phone in your pocket betrays your physical location. Your location is calculated by a process known as cell phone tower triangulation. In addition to giving away your location, research shows your cell phone can betray your personal identity as well.
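As a rough illustration of the geometry involved (real networks estimate distance from signal timing and strength, and this toy ignores measurement noise entirely), three tower-to-phone distances pin down a 2D position:

```python
import math

def trilaterate(towers):
    """Estimate a 2D position from three (x, y, distance) tower readings.

    Subtracting the first circle equation from the other two removes the
    squared unknowns, leaving a 2x2 linear system in (px, py)."""
    (x1, y1, d1), (x2, y2, d2), (x3, y3, d3) = towers
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    px = (b1 * a22 - b2 * a12) / det
    py = (a11 * b2 - a21 * b1) / det
    return px, py

# Hypothetical towers at known coordinates, with distances measured to a
# phone actually located at (3, 4).
towers = [(0, 0, 5.0),
          (10, 0, math.hypot(7, 4)),
          (0, 10, math.hypot(3, 6))]
print(trilaterate(towers))  # (3.0, 4.0)
```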

The abstract from: Person Identification Based on Hand Tremor Characteristics by Oana Miu, Adrian Zamfir, Corneliu Florea, reads:

A plethora of biometric measures have been proposed in the past. In this paper we introduce a new potential biometric measure: the human tremor. We present a new method for identifying the user of a handheld device using characteristics of the hand tremor measured with a smartphone built-in inertial sensors (accelerometers and gyroscopes). The main challenge of the proposed method is related to the fact that human normal tremor is very subtle while we aim to address real-life scenarios. To properly address the issue, we have relied on weighted Fourier linear combiner for retrieving only the tremor data from the hand movement and random forest for actual recognition. We have evaluated our method on a database with 10 000 samples from 17 persons reaching an accuracy of 76%.

The authors emphasize the limited size of their dataset and unexplored issues, but with an accuracy of 76% in identification mode and 98% in authentication (matching tremor to user in the database) mode, this approach merits further investigation.

Recording tremor data required no physical modification of the cell phones, only installation of an application that captured gyroscope and accelerometer data.
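The paper’s pipeline (a weighted Fourier linear combiner plus a random forest) is considerably more sophisticated, but a minimal sketch of the underlying idea, extracting a frequency-domain feature from raw accelerometer samples, might look like this (the signal below is synthetic, purely for illustration):

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Return the dominant frequency (Hz) of a 1-D signal via a naive DFT.

    Physiological hand tremor concentrates around 8-12 Hz, so even a
    feature as crude as the strongest spectral peak separates tremor
    from slower deliberate movement."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # drop the DC component
    best_bin, best_power = 0, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        if power > best_power:
            best_bin, best_power = k, power
    return best_bin * sample_rate / n

# Synthetic 10 Hz "tremor" sampled at 100 Hz for one second.
signal = [math.sin(2 * math.pi * 10 * t / 100) for t in range(100)]
print(dominant_frequency(signal, 100))  # 10.0
```

Features like this, computed per user, are what a classifier would then learn to match against a database of known hands.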

Before the targeting community gets too excited about having cell phone location and personal identify via tremor data, the authors do point out that personal tremor data can be recorded and used to defeat identification.

It may be that hand tremor isn’t the killer identification mechanism, but what if it were considered one factor of identification?

That is, hand tremor, plus location (say, a root terminal), plus a password, could all be required for a successful login.

That builds on our understanding from topic maps: identification is never a single factor, but can be multiple factors viewed from different perspectives.

In that sense, two-factor identification demonstrates how lame our typical understanding of identity is in fact.
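A login requiring all three factors just described can be sketched as follows (every factor name and value below is hypothetical, purely for illustration):

```python
import hashlib

def authenticate(registered, presented):
    """Toy multi-factor check: authentication succeeds only when every
    registered factor matches what was presented. All factor names and
    values here are hypothetical."""
    return all(presented.get(k) == v for k, v in registered.items())

registered = {
    "password": hashlib.sha256(b"hunter2").hexdigest(),
    "tremor_profile": "user-17",   # e.g. an id matched by a classifier
    "terminal": "root-console-1",  # required physical location
}

presented = dict(registered)       # all three factors supplied
print(authenticate(registered, presented))  # True

presented["terminal"] = "remote"   # wrong location: one factor fails
print(authenticate(registered, presented))  # False
```

Any one factor is weak on its own; requiring all of them is what raises the bar.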

Failing to Ask Panama for Mossack Fonseca Documents “inexplicable?”

Saturday, June 25th, 2016

Panama Papers are available. Why hasn’t U.S. asked to see them? by Marisa Taylor and Kevin G. Hall.

From the post:

…as of June 23, Panama said it had not received a single request from the United States for access to the data seized by Panamanian authorities from Mossack Fonseca, the law firm at the heart of the Panama Papers, said Sandra Sotillo, spokeswoman for Panamanian Attorney General Kenia Porcell.

A great account of the wheres and wherefores of the US failure to request the seized documents, which closes with this quote:


Roma Theus, another former federal prosecutor, was surprised it had taken so long to ask for the data.

“It’s not three-months difficult,” he said of the process.

He also wondered why European countries, such as Germany or England, haven’t requested the data.

“It’s a very legitimate question why they haven’t, given the enormous amount of data that’s available on potential corruption and other crimes,” Theus said. “It’s inexplicable.”

Considering the wealth and power of those who use offshore accounts to hide their funds, do you find the failure of the U.S., Germany, and England to request the data “inexplicable?”

I don’t.

Corrupt but not “inexplicable.”

After you read this story, be sure to read the others listed under The Secret Shell Game.

The Science of Scientific Writing

Saturday, June 25th, 2016

The Science of Scientific Writing by George D. Gopen and Judith A. Swan.

From the paper:

Science is often hard to read. Most people assume that its difficulties are born out of necessity, out of the extreme complexity of scientific concepts, data and analysis. We argue here that complexity of thought need not lead to impenetrability of expression; we demonstrate a number of rhetorical principles that can produce clarity in communication without oversimplifying scientific issues. The results are substantive, not merely cosmetic: Improving the quality of writing actually improves the quality of thought.

The fundamental purpose of scientific discourse is not the mere presentation of information and thought, but rather its actual communication. It does not matter how pleased an author might be to have converted all the right data into sentences and paragraphs; it matters only whether a large majority of the reading audience accurately perceives what the author had in mind. Therefore, in order to understand how best to improve writing, we would do well to understand better how readers go about reading. Such an understanding has recently become available through work done in the fields of rhetoric, linguistics and cognitive psychology. It has helped to produce a methodology based on the concept of reader expectations.

What? Evidence-based authoring? Isn’t that like evidence-based interface design?

Trying to communicate with readers on their own terms and not forcing them to tough it out?

Next thing you know, Gopen will be saying that failures to communicate in writing are the author’s fault!

Wait!

He does:


On first reading, however, many of us arrive at the paragraph’s end without a clear sense of where we have been or where we are going. When that happens, we tend to berate ourselves for not having paid close enough attention. In reality, the fault lies not with us, but with the author. (page 9 of the pdf)

“The Science of Scientific Writing” is a great authoring by example guide.

Spending time with it can only make you a better writer.

You will be disappointed if you try to find this item from the bibliography:

Gopen, George D. 1990. The Common Sense of Writing: Teaching Writing from the Reader’s Perspective. To be published.

Worldcat.org reports one (1) copy of The Common Sense of Writing: Teaching Writing from the Reader’s Perspective is held by the Seattle University Law Library. Good luck!

I located an interview with Dr. Gopen, which identifies these two very similar volumes:

Expectations: Teaching Writing from the Reader’s Perspective by George D. Gopen, ISBN-13: 978-0205296170, at 416 pages, 2004. (The complete story.)

The Sense of Structure: Writing from the Reader’s Perspective by George D. Gopen, ISBN-13: 978-0205296323, at 256 pages, 2004. (A textbook based on “Expectations….”)

Neither volume is cheap but when I do order, I’m going for Expectations: Teaching Writing from the Reader’s Perspective.

In the meantime, there’s enough poor writing on the Internet to keep me practicing the lessons of The Science of Scientific Writing for the foreseeable future.

Speaking of Wasted Money on DRM / WWW EME Minus 2 Billion Devices

Friday, June 24th, 2016

Just earlier today I was scribbling about wasting money on DRM saying:


I feel sorry for content owners. Their greed makes them easy prey for people selling patented DRM medicine for the delivery of their content. In the long run it only hurts themselves (the DRM tax) and users. In fact, the only people making money off of DRM are the people who deliver content.

This evening I ran across: Chrome Bug Makes It Easy to Download Movies From Netflix and Amazon Prime by Michael Nunez.

Nunez points out an exploit in the open source Chrome browser enables users to save movies from Netflix and Amazon Prime.

Even once a patch appears, others can compile the code without the patch and continue illegally downloading movies from Netflix and Amazon Prime.

Even more amusing:


Widevine is currently used in more than 2 billion devices worldwide and is the same digital rights management technology used in Firefox and Opera browsers. Safari and Internet Explorer, however, use different DRM technology.

Widevine plus properly configured device = broken DRM.

When Sony and others calculate their ROI from DRM, be sure to subtract 2 billion+ devices that probably won’t honor the no-record DRM setting.

Visions of a Potential Design School

Friday, June 24th, 2016

With cautions:

design-school-460

The URL that appears in the image: http://di16.rca.ac.uk/project/the-school-of-___/.

It’s not entirely clear to me if Chrome and/or Mozilla on Ubuntu are displaying these pages correctly. I am unable to scroll within the displayed windows of text. Perhaps that is intentional.

The caution is about the quote from Twitter:

“…deconstruct the ways that they have been inculcated….”

It does not promise you will be able to deconstruct the new narrative that enables you to “deconstruct” the old one.

That is, we never stand outside of all narratives, but in a different narrative than the one we have under deconstruction. (sorry)

…possibly biased? Try always biased.

Friday, June 24th, 2016

Artificial Intelligence Has a ‘Sea of Dudes’ Problem by Jack Clark.

From the post:


Much has been made of the tech industry’s lack of women engineers and executives. But there’s a unique problem with homogeneity in AI. To teach computers about the world, researchers have to gather massive data sets of almost everything. To learn to identify flowers, you need to feed a computer tens of thousands of photos of flowers so that when it sees a photograph of a daffodil in poor light, it can draw on its experience and work out what it’s seeing.

If these data sets aren’t sufficiently broad, then companies can create AIs with biases. Speech recognition software with a data set that only contains people speaking in proper, stilted British English will have a hard time understanding the slang and diction of someone from an inner city in America. If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.

“I call it a sea of dudes,” said Margaret Mitchell, a researcher at Microsoft. Mitchell works on computer vision and language problems, and is a founding member—and only female researcher—of Microsoft’s “cognition” group. She estimates she’s worked with around 10 or so women over the past five years, and hundreds of men. “I do absolutely believe that gender has an effect on the types of questions that we ask,” she said. “You’re putting yourself in a position of myopia.”

Margaret Mitchell makes a pragmatic case for diversity in the workplace, at least if you want to avoid male-biased AI.

Not that a diverse workplace results in an “unbiased” AI; it results in a biased AI that isn’t solely male-biased.

It isn’t possible to escape bias because some person or persons has to score “correct” answers for an AI. The scoring process imparts to the AI being trained, the biases of its judge of correctness.

Unless someone wants to contend there are potential human judges without biases, I don’t see a way around imparting biases to AIs.

By being sensitive to evidence of biases, we can in some cases choose the biases we want an AI to possess, but an AI possessing no biases at all, isn’t possible.

AIs are, after all, our creations so it is only fair that they be made in our image, biases and all.

Hardening the Onion [Other Apps As Well?]

Friday, June 24th, 2016

Tor coders harden the onion against surveillance by Paul Ducklin.

From the post:

A nonet of security researchers are on the warpath to protect the Tor Browser from interfering busybodies.

Tor, short for The Onion Router, is a system that aims to help you be anonymous online by disguising where you are, and where you are heading.

That way, nation-state content blockers, law enforcement agencies, oppressive regimes, intelligence services, cybercrooks, Lizard Squadders or even just overly-inquisitive neighbours can’t easily figure out where you are going when you browse online.

Similarly, sites you browse to can’t easily tell where you came from, so you can avoid being traced back or tracked over time by unscrupulous marketers, social engineers, law enforcement agencies, oppressive regimes, intelligence services, cybercrooks, Lizard Squadders, and so on.

Paul provides a high-level view of Selfrando: Securing the Tor Browser against De-anonymization Exploits by Mauro Conti, et al.

The technique generalizes beyond Tor to GNU Bash 4.3, GNU less 4.58, Nginx 1.8.0, Socat 1.7.3.0, Thttpd 2.26, and Google’s Chromium browser.

Given the speed at which defenders play “catch up,” there is much to learn here that will be useful for years to come.

Enjoy!

Pride Goeth Before A Fall – DMCA & Security Researchers

Friday, June 24th, 2016

Cory Doctorow has written extensively on the problems with present plans to incorporate DRM in HTML5:

W3C DRM working group chairman vetoes work on protecting security researchers and competition – June 18, 2016.

An Open Letter to Members of the W3C Advisory Committee – May 12, 2016.

Save Firefox: The W3C’s plan for worldwide DRM would have killed Mozilla before it could start – May 11, 2016.

Interoperability and the W3C: Defending the Future from the Present – March 29, 2016.

among others.

In general I agree with Cory’s reasoning but I don’t see:

…Once DRM is part of a full implementation of HTML5, there’s a real risk to security researchers who discover defects in browsers and want to warn users about them…. (from Cory’s latest post)

Do you remember the Sony “copy-proof” CDs? See Sony “copy-proof” CDs cracked with a marker pen. Then, just as now, Sony was about to hand over bushels of cash to the content delivery crowd.

When security researchers discover flaws in the browser DRM, what prevents them from advising users?

Cory says the anti-circumvention provisions of the DMCA prevent security researchers from discovering and disclosing such flaws.

That’s no doubt true, if you want to commit a crime (violate the DMCA) and publish evidence of that crime with your name attached to it on the WWW.

Isn’t that a case of pride goeth before a fall?

If I want to alert other users to security defects in their browsers, possibly equivalent to the marker pen for Sony CDs, I post that to the WWW anonymously.

Or publish code to make that defect apparent to even a casual user.

What I should not do is put my name on either a circumvention bug report or code to demonstrate it. Yes?

That doesn’t answer Cory’s points about impairing innovation, etc. but once Sony realizes it has been had, again, by the content delivery crowd, what’s the point of more self-inflicted damage?

I feel sorry for content owners. Their greed makes them easy prey for people selling patented DRM medicine for the delivery of their content. In the long run it only hurts themselves (the DRM tax) and users. In fact, the only people making money off of DRM are the people who deliver content.

Should DRM appear as proposed in HTML5, any suggestions for a “marker pen” logo to be used by hackers of a Content Decryption Module?

PS: Another approach to opposing DRM would be to inform shareholders of Sony and other content owners they are about to be raped by content delivery systems.

PPS: In private email Cory advised me to consider the AACS encryption key controversy, where public posting of an encryption key was challenged with take down requests. However, in the long run, such efforts only spread the key more widely, hardly the effect intended by those attempting to limit its spread.

And there is the Dark Web, ahem, where it is my understanding that non-legal content and other material can be found.

SEC Warning: Hackers, Limit Fraud to Traditional Means

Thursday, June 23rd, 2016

U.S. SEC accuses U.K. man of hacking, fraudulent trades by Jonathan Stempel.

From the post:

The U.S. Securities and Exchange Commission sued a U.K. man it said hacked into online brokerage accounts of several U.S. investors, placed unauthorized stock trades, and within minutes made profitable trades in the same stocks in his own account.

“We will swiftly track down hackers who prey on investors as we allege Mustapha did, no matter where they are operating from and no matter how sophisticated their technology,” Robert Cohen, co-chief of the SEC enforcement division’s market abuse unit, said in a statement.

The case is SEC v Mustapha, U.S. District Court, Southern District of New York, No. 16-04805.

I can’t find the record in PACER. Perhaps it is too recent?

In any event, hackers be warned that the SEC will swiftly move to track you down should you commit fraud on investors using “sophisticated” technology.

Salting of news sources, insider trading, and other, more traditional means of defrauding investors will continue to face lackadaisical enforcement efforts.

You don’t have to take my word for it. See: Report: SEC Filed a Record Number of Enforcement Actions in FY 2015, Aggregate Fines and Penalties Declined by Kevin LaCroix.

Kevin not only talks about the numbers but also provides links to the original report, a novelty for some websites.

The lesson here is to not distinguish yourself by using modern means to commit securities fraud. The SEC is more likely to pursue you.

Is that how you read this case? 😉

Bots, Won’t You Hide Me?

Thursday, June 23rd, 2016

Emerging Trends in Social Network Analysis of Terrorism and Counterterrorism, How Police Are Scanning All Of Twitter To Detect Terrorist Threats, Violent Extremism in the Digital Age: How to Detect and Meet the Threat, Online Surveillance: …ISIS and beyond [Social Media “chaff”] are just a small sampling of posts on the detection of “terrorists” on social media.

The last one is my post illustrating how “terrorist” at one time = “anti-Vietnam war,” “civil rights,” and “gay rights.” Due to the public nature of social media, avoiding government surveillance isn’t possible.

I stole the title, Bots, Won’t You Hide Me? from Ben Bova’s short story, Stars, Won’t You Hide Me?. It’s not very long and if you like science fiction, you will enjoy it.

Bova took verses in the short story from Sinner Man, a traditional African spiritual, which was recorded by a number of artists.

All of that is a very round about way to introduce you to a new Twitter account: ConvJournalism:

All you need to know about Conversational Journalism, (journalistic) bots and #convcomm by @martinhoffmann.

Surveillance of groups on social media isn’t going to succeed (see The White House Asked Social Media Companies to Look for Terrorists. Here’s Why They’d #Fail by Jenna McLaughlin), and bots can play an important role in assisting in that failure.

Imagine bots that not only realistically mimic the chatter of actual human users but also follow, unfollow, and engage in apparent conspiracies with other bots, entirely without human direction, or with very little.
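As a minimal sketch of how such chatter-mimicking bots could begin (a toy word-level Markov chain, nowhere near a production bot, with invented sample text):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Build a word-level Markov chain from sample chatter: for each
    word, record the words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length, seed=0):
    """Generate up to `length` words of plausible-looking chatter by
    walking the chain from a starting word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break  # dead end: the last word had no observed successor
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

sample = "the meeting is at noon the meeting moved to the park"
chain = build_chain(sample)
print(babble(chain, "the", 6))
```

Real decoy bots would need far more (posting schedules, follow graphs, topical coherence), but even this much generates text that keyword-matching surveillance must wade through.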

Follow ConvJournalism and promote bot research/development that helps all of us hide. (I’d rather have the bots say yes than Satan.)

Index on Censorship Big Debate: Journalism or fiction?

Thursday, June 23rd, 2016

Index on Censorship Big Debate: Journalism or fiction? by Josie Timms.

From the webpage:

The Index on Censorship Big Debate took place at the 5th annual Leeds Big Bookend Festival this week, where journalists and authors were invited to discuss which has the biggest impact: journalism or fiction. Index’s magazine editor Rachael Jolley was joined by assistant features editor of The Yorkshire Post Chris Bond, Yorkshire-based journalist and author Yvette Huddleston and author of the award-winning Promised Land Anthony Clavane to explore which medium is more influential and why, as part of a series of Time To Talk debates held by Eurozine. Audio from the debate will be available at Time to Talk or listen below.

Highly entertaining discussion but “debate” is a bit of a stretch.

No definition of “impact” was offered, although an informal show of hands was reported to have the vast majority remembering a work of fiction that influenced them and only a distinct minority remembering a work of journalism.

Interesting result, because Dickens, a journalist, was mentioned as an influential writer of fiction. At the time, fiction was published in serialized formats (newspapers, magazines; see Victorian Serial Novels), spreading the cost of a work of fiction over months, if not longer.

Dickens is a good example to not make too much of the distinction, if any, between journalism and fiction. Both are reports of the past, present or projected future from a particular point of view.

At their best, journalism and fiction inform us, enlighten us, show us other points of view, capture events and details we did not witness ourselves.

That doesn’t accord with the 0 or 1 reality of our silicon servants, but I have no desire to help AIs become equal to humans by making humans dumber.

Enjoy!

The Infinite Jukebox

Wednesday, June 22nd, 2016

The Infinite Jukebox

From the FAQ:

  • What is this? For when your favorite song just isn’t long enough. This web app lets you upload a favorite MP3 and will then generate a never-ending and ever changing version of the song. It does what Infinite Gangnam Style did but for any song.
  • It never stops? – That’s right. It will play forever.
  • How does it work? – We use the Echo Nest analyzer to break the song into beats. We play the song beat by beat, but at every beat there’s a chance that we will jump to a different part of song that happens to sound very similar to the current beat. For beat similarity we look at pitch, timbre, loudness, duration and the position of the beat within a bar. There’s a nifty visualization that shows all the possible transitions that can occur at any beat.
  • Are there any ways to control the song? Yes – here are some keys:
    • [space] – Start and stop playing the song
    • [left arrow] – Decrement the current play velocity by one
    • [right arrow] – Increment the current play velocity by one
    • [Down arrow] – Sets the current play velocity to zero
    • [control] – freeze on the current beat
    • [shift] – bounce between the current beat and all of the similar sounding beats. These are the branch points.

    • ‘h’ – Bring it on home – toggles infinite mode off/on.
  • What do the colored blocks represent? Each block represents a beat in the song. The colors are related to the timbre of the music for that beat.

That should be enough to get you started. 😉
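The beat-jumping the FAQ describes can be sketched in a few lines (a toy that assumes the similarity table has already been computed; the real app scores pitch, timbre, loudness, duration and bar position to build it):

```python
import random

def infinite_play(similar, n_beats, steps, branch_chance=0.3, seed=1):
    """Play beat by beat; at each beat that has similar-sounding
    beats, sometimes jump to one of them instead of advancing.

    `similar` maps a beat index to a list of beats judged similar."""
    rng = random.Random(seed)
    beat, path = 0, []
    for _ in range(steps):
        path.append(beat)
        if beat in similar and rng.random() < branch_chance:
            beat = rng.choice(similar[beat])  # take a branch point
        else:
            beat = (beat + 1) % n_beats       # wrap around: never stops
    return path

# Hypothetical 8-beat song where beat 6 sounds like beat 2.
print(infinite_play({6: [2]}, n_beats=8, steps=20))
```

Because the walk can always either advance or branch, playback genuinely never terminates; `steps` here only bounds the demonstration.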

There’s a post on the Infinite Jukebox at Music Machinery.

I have mixed feelings about the Infinite Jukebox. While I appreciate its artistry and ability to make the familiar into something familiar, yet different, I also have a deep appreciation for the familiar.

Compare: While My Guitar Gently Weeps by the Beatles to Somebody to Love by Jefferson Airplane at the Infinite Jukebox.

The heart rending vocals of Grace Slick, on infinite play, become overwhelming.

I need to upload Lather. Strictly for others. I’m quite happy with the original.

Enjoy!

Shallow Reading (and Reporting)

Wednesday, June 22nd, 2016

Stefano Bertolo tweets:

bertolo-01-460

From the Chicago Tribune post:

On June 4, the satirical news site the Science Post published a block of “lorem ipsum” text under a frightening headline: “Study: 70% of Facebook users only read the headline of science stories before commenting.”

Nearly 46,000 people shared the post, some of them quite earnestly — an inadvertent example, perhaps, of life imitating comedy.

Now, as if it needed further proof, the satirical headline’s been validated once again: According to a new study by computer scientists at Columbia University and the French National Institute, 59 percent of links shared on social media have never actually been clicked: In other words, most people appear to retweet news without ever reading it.

The missing satire link:

Study: 70% of Facebook users only read the headline of science stories before commenting, from the satirical news site Science Post.

The passage:

According to a new study by computer scientists at Columbia University and the French National Institute, 59 percent of links shared on social media have never actually been clicked: In other words, most people appear to retweet news without ever reading it.

should have included a link to: Social Clicks: What and Who Gets Read on Twitter?, by Maksym Gabielkov, Arthi Ramachandran, Augustin Chaintreau, Arnaud Legout.

Careful readers, however, would have followed the link to Social Clicks: What and Who Gets Read on Twitter?, only to discover that Dewey mis-reported the original article.

Here’s how to identify the mis-reporting:

First, as technical articles often do, the authors started with definitions. Definitions that will influence everything you read in that article.


In the rest of this article, we will use the following terms to describe a given URL or online article.

Shares. Number of times a URL has been published in tweets. An original tweet containing the URL or a retweet of this tweet are both considered as a new share.
…(emphasis in the original)

The important point to remember: every tweet counts as a “share.” If I post a tweet whose URL no one ever clicks, it still goes into the share bucket, as one of the shares that was never clicked on.

That is going to impact our counting of “shares” that were never “clicked on.”

In section 3.3 Blockbusters and the share button, the authors write:


First, 59% of the shared URLs are never clicked or, as we call them, silent. Note that we merged URLs pointing to the same article, so out of 10 articles mentioned on Twitter, 6 typically on niche topics are never clicked 10.

Because silent URLs are so common, they actually account for a significant fraction (15%) of the whole shares we collected, more than one out of seven. An interesting paradox is that there seems to be vastly more niche content that users are willing to mention in Twitter than the content that they are actually willing to click on.
… (emphasis in the original)

To re-write that with the definition of shared inserted:

“…59% of the URLs published in a tweet or re-tweet are never clicked…”

That includes:

  1. Tweet with a URL and no one clicks on the shortened URL in bit.ly
  2. Re-tweet with a URL and no one clicks on the shortened URL in bit.ly

Since tweets and re-tweets are lumped together (they may not be in the data, I haven’t seen it, yet), it isn’t possible to say how many re-tweets occurred without corresponding clicks on the shortened URLs.
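Given per-URL share and click counts under the paper’s definitions, the two headline numbers (59% of URLs silent; silent URLs accounting for 15% of all shares) fall out of a calculation like this (the sample data below is invented, purely for illustration):

```python
def silent_stats(records):
    """From (shares, clicks) per merged URL, compute the fraction of
    URLs that are 'silent' (never clicked) and the fraction of all
    shares those silent URLs account for."""
    silent = [r for r in records if r[1] == 0]
    url_fraction = len(silent) / len(records)
    share_fraction = sum(r[0] for r in silent) / sum(r[0] for r in records)
    return url_fraction, share_fraction

# (shares, clicks) per URL -- a URL counts as "shared" once for every
# tweet AND every retweet, whether or not anyone ever clicked it.
sample = [(1, 0), (1, 0), (3, 0), (5, 12), (10, 40)]
print(silent_stats(sample))  # (0.6, 0.25)
```

Note what the calculation cannot tell you: within a silent URL’s share count, how many shares were original tweets versus retweets. That is exactly the distinction the “most people retweet without reading” headline needs and the aggregate numbers don’t provide.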

I’m certain people share tweets without visiting URLs but this article isn’t authority for percentages on that claim.

Not only should you visit URLs but you should also read carefully what you find, before re-tweeting or reporting.