The Feynman Technique – Contest for Balisage 2016?

June 28th, 2016

The Best Way to Learn Anything: The Feynman Technique by Shane Parrish.

From the post:

There are four simple steps to the Feynman Technique, which I’ll explain below:

  1. Choose a Concept
  2. Teach it to a Toddler
  3. Identify Gaps and Go Back to The Source Material
  4. Review and Simplify

This made me think of the late-breaking Balisage 2016 papers posted by Tommie Usdin in email:

  • Saxon-JS – XSLT 3.0 in the Browser, by Debbie Lockett and Michael Kay, Saxonica
  • A MicroXPath for MicroXML (AKA A New, Simpler Way of Looking at XML Data Content), by Uche Ogbuji, Zepheira
  • A catalog of Functional programming idioms in XQuery 3.1, by James Fuller, MarkLogic

New contest for Balisage?

Pick a concept from a Balisage 2016 presentation and you have sixty (60) seconds to explain it to Balisage attendees.

What do you think?

Remember, you can’t play if you don’t attend! Register today!

If Tommie agrees, the winner gets me to record a voice mail greeting for their phone! ;-)

Integrated R labs for high school students

June 28th, 2016

Integrated R labs for high school students by Amelia McNamara.

From the webpage:

Amelia McNamara, James Molyneux, Terri Johnson

This looks like a very promising approach for capturing the interests of high school students in statistics and R.

From the curriculum page of the larger project, Mobilize:

Mobilize centers its curricula around participatory sensing campaigns in which students use their mobile devices to collect and share data about their communities and their lives, and to analyze these data to gain a greater understanding about their world. Mobilize breaks barriers by teaching students to apply concepts and practices from computer science and statistics in order to learn science and mathematics. Mobilize is dynamic: each class collects its own data, and each class has the opportunity to make unique discoveries. We use mobile devices not as gimmicks to capture students’ attention, but as legitimate tools that bring scientific enquiry into our everyday lives.

Mobilize comprises four key curricula: Introduction to Data Science (IDS), Algebra I, Biology, and Mobilize Prime, all focused on preparing students to live in a data-driven world. The Mobilize curricula are a unique blend of computational and statistical thinking subject matter content that teaches students to think critically about and with data. The Mobilize curricula utilize innovative mobile technology to enhance math and science classroom learning. Mobilize brings “Big Data” into the classroom in the form of participatory sensing, a hands-on method in which students use mobile devices to collect data about their lives and community, then use Mobilize Visualization tools to analyze and interpret the data.

I like the approach of having the students collect and process their own data. If they learn to question their own data and processes, hopefully they will ask questions about data processing results presented as “facts.” (Since 2016 is a presidential election year in the United States, questioning claimed data results is especially important.)

Enjoy!

D3 4.0.0

June 28th, 2016

Mike Bostock tweets:

After 12+ months and ~4,878 commits, I am excited to announce the release of D3 4.0! https://github.com/d3/d3/releases/v4.0.0 … #d3js

After looking at the highlights page on Github, I couldn’t in good conscience omit any of it:

D3 is now modular, composed of many small libraries that you can also use independently. Each library has its own repo and release cycle for faster development. The modular approach also improves the process for custom bundles and plugins.

There are a lot of improvements in 4.0: there were about as many commits in 4.0 as in all prior versions of D3. Some changes make D3 easier to learn and use, such as immutable selections. But there are lots of new features, too! These are covered in detail in the release notes; here are a few highlights.

Colors, Interpolators and Scales

Shapes and Layouts

Selections, Transitions, Easings and Timers

Even More!

Don’t complain to me that you are bored over the Fourth of July weekend in the United States.

Downloads: d3.zip, Source code (zip), Source code (tar.gz).

How To Get On The FBI Terrorist Watch List

June 28th, 2016

Thomas Neuberger published a list of activities that, cumulatively, may get you on the FBI terrorist watch list: We Are All Terror Suspects Under the FBI’s Communities Against Terrorism Program.

Unfortunately, given the secrecy surrounding the FBI terrorist watch list, it isn’t possible to know which activities, or to what degree, are necessary to ensure your inclusion on the list.

The same is true for the no fly list, except there you will be prevented from flying, which is a definite “tell” that you are on the no fly list.

Thomas outlines the dangers of the FBI terrorist watch list, but not how we can go about defeating those dangers.

One obvious solution is to get everyone on the FBI terrorist watch list. If we are all equally suspects, the FBI will spend all its time trying to separate merely “suspects,” from “really suspects,” from “really terrorist suspects.”

To that end, think about the following:

  • Report sightings of FBI agents with unknown persons.
  • Report sightings of FBI agents with known persons.
  • Report people entering federal buildings.
  • Report people exiting federal buildings.
  • Report people entering/exiting state/local government offices.
  • Report movements of gasoline, butane, etc., trucks.
  • Report people entering/exiting airports.
  • Report people entering/leaving bars.
  • Report people buying gasoline or butane.
  • Report people buying toys.
  • Report people entering/exiting gun shops/shows.
  • etc.

The FBI increases its ignorance every day by collecting more data than it can usefully process.

Help yourself and your fellow citizens to hide in a sea of data and ignorance.

Report your sightings to the FBI today!

PS: If that sounds ineffectual, remember that the FBI was warned about Omar Mateen, twice. When, not if, a future terrorist attack happens and your accidental report of the terrorist surfaces, how will that make the FBI look?

The FBI has created a data collection madhouse for itself. Help them enjoy it.

Functor Fact @FunctorFact [+ Tip for Selling Topic Maps]

June 28th, 2016

John D. Cook has started @FunctorFact, which tweets “…about category theory and functional programming.”

John has a page listing his Twitter accounts. It needs to be updated to reflect the addition of @FunctorFact.

BTW, just by accident I’m sure, John’s blog post for today is titled: Category theory and Koine Greek. It has the following lesson for topic map practitioners and theorists:


Another lesson from that workshop, the one I want to focus on here, is that you don’t always need to convey how you arrived at an idea. Specifically, the leader of the workshop said that if you discover something interesting from reading the New Testament in Greek, you can usually present your point persuasively using the text in your audience’s language without appealing to Greek. This isn’t always possible—you may need to explore the meaning of a Greek word or two—but you can use Greek for your personal study without necessarily sharing it publicly. The point isn’t to hide anything, only to consider your audience. In a room full of Greek scholars, bring out the Greek.

This story came up in a recent conversation about category theory. You might discover something via category theory but then share it without discussing category theory. If your audience is well versed in category theory, then go ahead and bring out your categories. But otherwise your audience might be bored or intimidated, as many people would be listening to an argument based on the finer points of Koine Greek grammar. Microsoft’s LINQ software, for example, was inspired by category theory principles, but you’d be hard pressed to find any reference to this because most programmers don’t want to know or need to know where it came from. They just want to know how to use it.

Sure, it is possible to recursively map subject identities in order to arrive at a useful and maintainable mapping between subject domains, but the people with the checkbook are only interested in a viable result.

How you got there could involve enslaved pixies for all they care. They do care about negative publicity so keep your use of pixies to yourself.

Looking forward to tweets from @FunctorFact!

Digital Rights – Privacy – Video Conference – Wednesday, June 29, 2016

June 26th, 2016

Video conference for campus and community organizers (June 2016)

From the webpage:


Are you part of a campus or community organization concerned about digital rights?

If not, do you want to raise a voice in your community for privacy and access to the intellectual commons?

We'd like to help! EFF will host a video conference to highlight opportunities for grassroots organizers on Wednesday, June 29, 2016 at 3pm PST / 6pm EST.

We'll hear from speakers describing campaigns and events available for your group's support, as well as best practices that you might consider emulating with your friends and neighbors. We're also eager to hear from you about any digital rights campaigns on which you're working in your community, and to expose others in this growing grassroots network to social media opportunities to support your activism and organizing.

Please register to receive the link through which to participate using an open, encrypted, video chat platform.

No word on removing the tape from your video camera for this event. ;-)

Spread the word about this video conference!

Another Betrayal By Cellphone – Personal Identity

June 26th, 2016

Normal operation of the cell phone in your pocket betrays your physical location. Your location is calculated by a process known as cell phone tower triangulation. In addition to giving away your location, research shows your cell phone can betray your personal identity as well.

The abstract from: Person Identification Based on Hand Tremor Characteristics by Oana Miu, Adrian Zamfir, Corneliu Florea, reads:

A plethora of biometric measures have been proposed in the past. In this paper we introduce a new potential biometric measure: the human tremor. We present a new method for identifying the user of a handheld device using characteristics of the hand tremor measured with a smartphone built-in inertial sensors (accelerometers and gyroscopes). The main challenge of the proposed method is related to the fact that human normal tremor is very subtle while we aim to address real-life scenarios. To properly address the issue, we have relied on weighted Fourier linear combiner for retrieving only the tremor data from the hand movement and random forest for actual recognition. We have evaluated our method on a database with 10 000 samples from 17 persons reaching an accuracy of 76%.

The authors emphasize the limited size of their dataset and unexplored issues, but with an accuracy of 76% in identification mode and 98% in authentication (matching tremor to user in the database) mode, this approach merits further investigation.

Recording tremor data required no physical modification of the cell phones, only installation of an application that captured gyroscope and accelerometer data.
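
To make the pipeline concrete, here is a minimal sketch in Python of the recognition approach, with two loudly labeled assumptions: a plain Butterworth band-pass filter (over the roughly 4–12 Hz band usually cited for physiological tremor) stands in for the paper’s weighted Fourier linear combiner, and the sensor data is synthetic. Only the random forest step matches the paper directly.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.ensemble import RandomForestClassifier

    FS = 100.0  # assumed sensor sampling rate in Hz

    def tremor_band(window, low=4.0, high=12.0):
        # Band-pass the inertial signals to the tremor band; the paper
        # instead uses a weighted Fourier linear combiner for this step.
        nyq = FS / 2.0
        b, a = butter(4, [low / nyq, high / nyq], btype="bandpass")
        return filtfilt(b, a, window, axis=0)

    def features(window):
        # window: (n_samples, 6) array of accelerometer + gyroscope axes
        t = tremor_band(window)
        return np.concatenate([t.mean(axis=0), t.std(axis=0)])

    # Synthetic stand-in data: 20 windows each for 3 "people" whose
    # tremor differs only in amplitude.
    rng = np.random.default_rng(0)
    X = np.array([features(rng.normal(0.0, 1.0 + 0.5 * p, (500, 6)))
                  for p in range(3) for _ in range(20)])
    y = np.repeat([0, 1, 2], 20)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.score(X, y))  # training accuracy only; real tests need held-out users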

Before the targeting community gets too excited about having cell phone location and personal identity via tremor data, the authors do point out that personal tremor data can be recorded and used to defeat identification.

It may be that hand tremor isn’t the killer identification mechanism, but what if it were considered to be one factor of identification?

That is, hand tremor, plus location (say, a root terminal), plus a password, would all be required for a successful login.

That builds on our understanding from topic maps that identification isn’t ever a single factor, but can be multiple factors seen from different perspectives.

In that sense, two-factor identification demonstrates how lame our typical understanding of identity is in fact.

Failing to Ask Panama for Mossack Fonseca Documents “inexplicable?”

June 25th, 2016

Panama Papers are available. Why hasn’t U.S. asked to see them? by Marisa Taylor and Kevin G. Hall.

From the post:

…as of June 23, Panama said it had not received a single request from the United States for access to the data seized by Panamanian authorities from Mossack Fonseca, the law firm at the heart of the Panama Papers, said Sandra Sotillo, spokeswoman for Panamanian Attorney General Kenia Porcell.

A great account of the whys and wherefores of the US failure to request the seized documents that closes with this quote:


Roma Theus, another former federal prosecutor, was surprised it had taken so long to ask for the data.

“It’s not three-months difficult,” he said of the process.

He also wondered why European countries, such as Germany or England, haven’t requested the data.

“It’s a very legitimate question why they haven’t, given the enormous amount of data that’s available on potential corruption and other crimes,” Theus said. “It’s inexplicable.”

Considering the wealth and power of those who use offshore accounts to hide their funds, do you find the failure of the U.S., Germany, and England to request the data “inexplicable?”

I don’t.

Corrupt but not “inexplicable.”

After you read this story, be sure to read the others listed under The Secret Shell Game.

The Science of Scientific Writing

June 25th, 2016

The Science of Scientific Writing by George D. Gopen and Judith A. Swan.

From the paper:

Science is often hard to read. Most people assume that its difficulties are born out of necessity, out of the extreme complexity of scientific concepts, data and analysis. We argue here that complexity of thought need not lead to impenetrability of expression; we demonstrate a number of rhetorical principles that can produce clarity in communication without oversimplifying scientific issues. The results are substantive, not merely cosmetic: Improving the quality of writing actually improves the quality of thought.

The fundamental purpose of scientific discourse is not the mere presentation of information and thought, but rather its actual communication. It does not matter how pleased an author might be to have converted all the right data into sentences and paragraphs; it matters only whether a large majority of the reading audience accurately perceives what the author had in mind. Therefore, in order to understand how best to improve writing, we would do well to understand better how readers go about reading. Such an understanding has recently become available through work done in the fields of rhetoric, linguistics and cognitive psychology. It has helped to produce a methodology based on the concept of reader expectations.

What? Evidence-based authoring? Isn’t that like evidence-based interface design?

Trying to communicate with readers on their own terms and not forcing them to tough it out?

Next thing you know, Gopen will be saying that failures to communicate in writing are the author’s fault!

Wait!

He does:


On first reading, however, many of us arrive at the paragraph’s end without a clear sense of where we have been or where we are going. When that happens, we tend to berate ourselves for not having paid close enough attention. In reality, the fault lies not with us, but with the author. (page 9 of the pdf)

“The Science of Scientific Writing” is a great authoring-by-example guide.

Spending time with it can only make you a better writer.

You will be disappointed if you try to find this item from the bibliography:

Gopen, George D. 1990. The Common Sense of Writing: Teaching Writing from the Reader’s Perspective. To be published.

Worldcat.org reports one (1) copy of The Common Sense of Writing: Teaching Writing from the Reader’s Perspective is held by the Seattle University Law Library. Good luck!

I located an interview with Dr. Gopen, which identifies these two very similar volumes:

Expectations: Teaching Writing from the Reader’s Perspective by George D. Gopen, ISBN-13: 978-0205296170, at 416 pages, 2004. (The complete story.)

The Sense of Structure: Writing from the Reader’s Perspective by George D. Gopen, ISBN-13: 978-0205296323, at 256 pages, 2004. (A textbook based on “Expectations….”)

Neither volume is cheap but when I do order, I’m going for Expectations: Teaching Writing from the Reader’s Perspective.

In the meantime, there’s enough poor writing on the Internet to keep me practicing the lessons of The Science of Scientific Writing for the foreseeable future.

Speaking of Wasted Money on DRM / WWW EME Minus 2 Billion Devices

June 24th, 2016

Just earlier today I was scribbling about wasting money on DRM saying:


I feel sorry for content owners. Their greed makes them easy prey for people selling patented DRM medicine for the delivery of their content. In the long run it only hurts themselves (the DRM tax) and users. In fact, the only people making money off of DRM are the people who deliver content.

This evening I ran across: Chrome Bug Makes It Easy to Download Movies From Netflix and Amazon Prime by Michael Nunez.

Nunez points out that an exploit in the open source Chrome browser enables users to save movies from Netflix and Amazon Prime.

Even once a patch appears, others can compile the code without the patch and continue illegally downloading movies from Netflix and Amazon Prime.

Even more amusing:


Widevine is currently used in more than 2 billion devices worldwide and is the same digital rights management technology used in Firefox and Opera browsers. Safari and Internet Explorer, however, use different DRM technology.

Widevine plus properly configured device = broken DRM.

When Sony and others calculate their ROI from DRM, be sure to subtract 2 billion+ devices that probably won’t honor the no-record DRM setting.

Visions of a Potential Design School

June 24th, 2016

With cautions:

[image: screenshot of “The School of ___” project page]

The URL that appears in the image: http://di16.rca.ac.uk/project/the-school-of-___/.

It’s not entirely clear to me if Chrome and/or Mozilla on Ubuntu are displaying these pages correctly. I am unable to scroll within the displayed windows of text. Perhaps that is intentional.

The caution is about the quote from Twitter:

“…deconstruct the ways that they have been inculcated….”

It does not promise you will be able to deconstruct the new narrative that enables you to “deconstruct” the old one.

That is, we never stand outside of all narratives, but in a different narrative than the one we have under deconstruction. (sorry)

…possibly biased? Try always biased.

June 24th, 2016

Artificial Intelligence Has a ‘Sea of Dudes’ Problem by Jack Clark.

From the post:


Much has been made of the tech industry’s lack of women engineers and executives. But there’s a unique problem with homogeneity in AI. To teach computers about the world, researchers have to gather massive data sets of almost everything. To learn to identify flowers, you need to feed a computer tens of thousands of photos of flowers so that when it sees a photograph of a daffodil in poor light, it can draw on its experience and work out what it’s seeing.

If these data sets aren’t sufficiently broad, then companies can create AIs with biases. Speech recognition software with a data set that only contains people speaking in proper, stilted British English will have a hard time understanding the slang and diction of someone from an inner city in America. If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.

“I call it a sea of dudes,” said Margaret Mitchell, a researcher at Microsoft. Mitchell works on computer vision and language problems, and is a founding member—and only female researcher—of Microsoft’s “cognition” group. She estimates she’s worked with around 10 or so women over the past five years, and hundreds of men. “I do absolutely believe that gender has an effect on the types of questions that we ask,” she said. “You’re putting yourself in a position of myopia.”

Margaret Mitchell makes a pragmatic case for diversity in the workplace, at least if you want to avoid male-biased AI.

Not that a diverse workplace results in an “unbiased” AI; it will produce a biased AI that isn’t solely male-biased.

It isn’t possible to escape bias because some person or persons have to score “correct” answers for an AI. The scoring process imparts to the AI being trained the biases of its judge of correctness.

Unless someone wants to contend there are potential human judges without biases, I don’t see a way around imparting biases to AIs.

By being sensitive to evidence of biases, we can in some cases choose the biases we want an AI to possess, but an AI possessing no biases at all isn’t possible.

AIs are, after all, our creations so it is only fair that they be made in our image, biases and all.

Hardening the Onion [Other Apps As Well?]

June 24th, 2016

Tor coders harden the onion against surveillance by Paul Ducklin.

From the post:

A nonet of security researchers are on the warpath to protect the Tor Browser from interfering busybodies.

Tor, short for The Onion Router, is a system that aims to help you be anonymous online by disguising where you are, and where you are heading.

That way, nation-state content blockers, law enforcement agencies, oppressive regimes, intelligence services, cybercrooks, Lizard Squadders or even just overly-inquisitive neighbours can’t easily figure out where you are going when you browse online.

Similarly, sites you browse to can’t easily tell where you came from, so you can avoid being traced back or tracked over time by unscrupulous marketers, social engineers, law enforcement agencies, oppressive regimes, intelligence services, cybercrooks, Lizard Squadders, and so on.

Paul provides a high-level view of Selfrando: Securing the Tor Browser against De-anonymization Exploits by Mauro Conti, et al.

The technique generalizes beyond Tor to GNU Bash 4.3, GNU less 4.58, Nginx 1.8.0, Socat 1.7.3.0, Thttpd 2.26, and Google’s Chromium browser.

Given the speed at which defenders play “catch up,” there is much to learn here that will be useful for years to come.

Enjoy!

Pride Goeth Before A Fall – DMCA & Security Researchers

June 24th, 2016

Cory Doctorow has written extensively on the problems with present plans to incorporate DRM in HTML5:

W3C DRM working group chairman vetoes work on protecting security researchers and competition – June 18, 2016.

An Open Letter to Members of the W3C Advisory Committee – May 12, 2016.

Save Firefox: The W3C’s plan for worldwide DRM would have killed Mozilla before it could start – May 11, 2016.

Interoperability and the W3C: Defending the Future from the Present – March 29, 2016.

among others.

In general I agree with Cory’s reasoning but I don’t see:

…Once DRM is part of a full implementation of HTML5, there’s a real risk to security researchers who discover defects in browsers and want to warn users about them…. (from Cory’s latest post)

Do you remember the Sony “copy-proof” CDs? See Sony “copy-proof” CDs cracked with a marker pen. Then, just as now, Sony was handing over bushels of cash to the content delivery crowd.

When security researchers discover flaws in the browser DRM, what prevents them from advising users?

Cory says the anti-circumvention provisions of the DMCA prevent security researchers from discovering and disclosing such flaws.

That’s no doubt true, if you want to commit a crime (violate the DMCA) and publish evidence of that crime with your name attached to it on the WWW.

Isn’t that a case of pride goeth before a fall?

If I want to alert other users to security defects in their browsers, possibly equivalent to the marker pen for Sony CDs, I post that to the WWW anonymously.

Or publish code to make that defect apparent to even a casual user.

What I should not do is put my name on either a circumvention bug report or code to demonstrate it. Yes?

That doesn’t answer Cory’s points about impairing innovation, etc. but once Sony realizes it has been had, again, by the content delivery crowd, what’s the point of more self-inflicted damage?

I feel sorry for content owners. Their greed makes them easy prey for people selling patented DRM medicine for the delivery of their content. In the long run it only hurts themselves (the DRM tax) and users. In fact, the only people making money off of DRM are the people who deliver content.

Should DRM appear as proposed in HTML5, any suggestions for a “marker pen” logo to be used by hackers of a Content Decryption Module?

PS: Another approach to opposing DRM would be to inform shareholders of Sony and other content owners they are about to be raped by content delivery systems.

PPS: In private email Cory advised me to consider the AACS encryption key controversy, where public posting of an encryption key was challenged with takedown requests. However, in the long run, such efforts only spread the key more widely, not the effect intended by those attempting to limit its spread.

And there is the Dark Web, ahem, where it is my understanding that non-legal content and other material can be found.

SEC Warning: Hackers, Limit Fraud to Traditional Means

June 23rd, 2016

U.S. SEC accuses U.K. man of hacking, fraudulent trades by Jonathan Stempel.

From the post:

The U.S. Securities and Exchange Commission sued a U.K. man it said hacked into online brokerage accounts of several U.S. investors, placed unauthorized stock trades, and within minutes made profitable trades in the same stocks in his own account.

“We will swiftly track down hackers who prey on investors as we allege Mustapha did, no matter where they are operating from and no matter how sophisticated their technology,” Robert Cohen, co-chief of the SEC enforcement division’s market abuse unit, said in a statement.

The case is SEC v Mustapha, U.S. District Court, Southern District of New York, No. 16-04805.

I can’t find the record in PACER. Perhaps it is too recent?

In any event, hackers be warned that the SEC will swiftly move to track you down should you commit fraud on investors using “sophisticated” technology.

Salting of news sources, insider trading, and other, more traditional means of defrauding investors will continue to face lackadaisical enforcement efforts.

You don’t have to take my word for it. See: Report: SEC Filed a Record Number of Enforcement Actions in FY 2015, Aggregate Fines and Penalties Declined by Kevin LaCroix.

Kevin not only talks about the numbers but also provides links to the original report, a novelty for some websites.

The lesson here is to not distinguish yourself by using modern means to commit securities fraud. The SEC is more likely to pursue you.

Is that how you read this case? ;-)

Bots, Won’t You Hide Me?

June 23rd, 2016

Emerging Trends in Social Network Analysis of Terrorism and Counterterrorism, How Police Are Scanning All Of Twitter To Detect Terrorist Threats, Violent Extremism in the Digital Age: How to Detect and Meet the Threat, Online Surveillance: …ISIS and beyond [Social Media “chaff”] are just a small sampling of posts on the detection of “terrorists” on social media.

The last one is my post illustrating how “terrorist” at one time = “anti-Vietnam war,” “civil rights,” and “gay rights.” Due to the public nature of social media, avoiding government surveillance isn’t possible.

I stole the title, Bots, Won’t You Hide Me? from Ben Bova’s short story, Stars, Won’t You Hide Me?. It’s not very long and if you like science fiction, you will enjoy it.

Bova took verses in the short story from Sinner Man, a traditional African American spiritual, which was recorded by a number of artists.

All of that is a very roundabout way to introduce you to a new Twitter account: ConvJournalism:

All you need to know about Conversational Journalism, (journalistic) bots and #convcomm by @martinhoffmann.

Surveillance of groups on social media isn’t going to succeed (see The White House Asked Social Media Companies to Look for Terrorists. Here’s Why They’d #Fail by Jenna McLaughlin), and bots can play an important role in assisting in that failure.

Imagine not only having bots that realistically mimic the chatter of actual human users, but that follow, unfollow, etc., and engage in apparent conspiracies with other bots. Entirely without human direction, or with very little.
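
As a toy illustration of the “mimic the chatter” part (everything here is hypothetical; convincing bots would need far more), a word-level Markov chain is the classic starting point:

    import random
    from collections import defaultdict

    def train_markov(words):
        # Map each word to the words observed to follow it.
        chain = defaultdict(list)
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def babble(chain, start, length=12, seed=None):
        # Walk the chain, picking a random observed successor each step.
        rng = random.Random(seed)
        word, out = start, [start]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break
            word = rng.choice(followers)
            out.append(word)
        return " ".join(out)

    corpus = "the meeting is at noon the meeting moved to the park".split()
    print(babble(train_markov(corpus), "the", seed=7))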

Follow ConvJournalism and promote bot research/development that helps all of us hide. (I’d rather have the bots say yes than Satan.)

Index on Censorship Big Debate: Journalism or fiction?

June 23rd, 2016

Index on Censorship Big Debate: Journalism or fiction? by Josie Timms.

From the webpage:

The Index on Censorship Big Debate took place at the 5th annual Leeds Big Bookend Festival this week, where journalists and authors were invited to discuss which has the biggest impact: journalism or fiction. Index’s magazine editor Rachael Jolley was joined by assistant features editor of The Yorkshire Post Chris Bond, Yorkshire-based journalist and author Yvette Huddleston and author of the award-winning Promised Land Anthony Clavane to explore which medium is more influential and why, as part of a series of Time To Talk debates held by Eurozine. Audio from the debate will be available at Time to Talk or listen below.

Highly entertaining discussion but “debate” is a bit of a stretch.

No definition of “impact” was offered, although an informal show of hands was reported to have the vast majority remembering a work of fiction that influenced them and only a distinct minority remembering a work of journalism.

Interesting result because Dickens, a journalist, was mentioned as an influential writer of fiction. At the time, fiction was published in serialized formats (newspapers, magazines; see Victorian Serial Novels), spreading the cost of a work of fiction over months, if not longer.

Dickens is a good example to not make too much of the distinction, if any, between journalism and fiction. Both are reports of the past, present or projected future from a particular point of view.

At their best, journalism and fiction inform us, enlighten us, show us other points of view, capture events and details we did not witness ourselves.

That doesn’t accord with the 0 or 1 reality of our silicon servants, but I have no desire to help AIs become equal to humans by making humans dumber.

Enjoy!

The Infinite Jukebox

June 22nd, 2016

The Infinite Jukebox

From the FAQ:

  • What is this? For when your favorite song just isn’t long enough. This web app lets you upload a favorite MP3 and will then generate a never-ending and ever changing version of the song. It does what Infinite Gangnam Style did but for any song.
  • It never stops? – That’s right. It will play forever.
  • How does it work? – We use the Echo Nest analyzer to break the song into beats. We play the song beat by beat, but at every beat there’s a chance that we will jump to a different part of song that happens to sound very similar to the current beat. For beat similarity we look at pitch, timbre, loudness, duration and the position of the beat within a bar. There’s a nifty visualization that shows all the possible transitions that can occur at any beat.
  • Are there any ways to control the song? Yes – here are some keys:
    • [space] – Start and stop playing the song
    • [left arrow] – Decrement the current play velocity by one
    • [right arrow] – Increment the current play velocity by one
    • [Down arrow] – Sets the current play velocity to zero
    • [control] – freeze on the current beat
    • [shift] – bounce between the current beat and all of the similar sounding beats. These are the
      branch points.

    • ‘h’ – Bring it on home – toggles infinite mode off/on.
  • What do the colored blocks represent? Each block represents a beat in the song. The colors are related
    to the timbre of the music for that beat.

That should be enough to get you started. ;-)
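
For the curious, the jump logic the FAQ describes can be sketched in a few lines of Python. The beat data here is hypothetical; in the real app, the “similar” lists come from the Echo Nest analysis of pitch, timbre, loudness, duration and bar position:

    import random

    def infinite_play_order(beats, branch_chance=0.2, steps=50, seed=None):
        # At every beat, either step to the next beat or, with some
        # probability, jump to a similar-sounding beat (a branch point).
        rng = random.Random(seed)
        i, order = 0, []
        for _ in range(steps):
            order.append(i)
            branches = beats[i]["similar"]
            if branches and rng.random() < branch_chance:
                i = rng.choice(branches)
            else:
                i = (i + 1) % len(beats)  # wrap around: it plays forever
        return order

    beats = [{"similar": []}, {"similar": [5]}, {"similar": []},
             {"similar": [0]}, {"similar": []}, {"similar": [1]}]
    print(infinite_play_order(beats, steps=12, seed=42))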

There’s a post on the Infinite Jukebox at Music Machinery.

I have mixed feelings about the Infinite Jukebox. While I appreciate its artistry and ability to make the familiar into something familiar, yet different, I also have a deep appreciation for the familiar.

Compare: While My Guitar Gently Weeps by the Beatles to Somebody to Love by Jefferson Airplane at the Infinite Jukebox.

The heart-rending vocals of Grace Slick, on infinite play, become overwhelming.

I need to upload Lather. Strictly for others. I’m quite happy with the original.

Enjoy!

Shallow Reading (and Reporting)

June 22nd, 2016

Stefano Bertolo tweets:

[image: screenshot of Stefano Bertolo’s tweet]

From the Chicago Tribune post:

On June 4, the satirical news site the Science Post published a block of “lorem ipsum” text under a frightening headline: “Study: 70% of Facebook users only read the headline of science stories before commenting.”

Nearly 46,000 people shared the post, some of them quite earnestly — an inadvertent example, perhaps, of life imitating comedy.

Now, as if it needed further proof, the satirical headline’s been validated once again: According to a new study by computer scientists at Columbia University and the French National Institute, 59 percent of links shared on social media have never actually been clicked: In other words, most people appear to retweet news without ever reading it.

The missing satire link:

Study: 70% of Facebook users only read the headline of science stories before commenting, from the satirical news site Science Post.

The passage:

According to a new study by computer scientists at Columbia University and the French National Institute, 59 percent of links shared on social media have never actually been clicked: In other words, most people appear to retweet news without ever reading it.

should have included a link to: Social Clicks: What and Who Gets Read on Twitter?, by Maksym Gabielkov, Arthi Ramachandran, Augustin Chaintreau, Arnaud Legout.

Careful readers, however, would have followed the link to Social Clicks: What and Who Gets Read on Twitter?, only to discover that Caitlin Dewey, author of the Tribune post, mis-reported the original article.

Here’s how to identify the mis-reporting:

First, as technical articles often do, the authors started with definitions. Definitions that will influence everything you read in that article.


In the rest of this article, we will use the following terms to describe a given URL or online article.

Shares. Number of times a URL has been published in tweets. An original tweet containing the URL or a retweet of this tweet are both considered as a new share.
…(emphasis in the original)

The important point to remember: every tweet counts as a “share.” If I post a tweet whose URL no one ever clicks, it goes into the share bucket as one of the shares that was never clicked on, whether or not anyone retweets it.

That is going to impact our counting of “shares” that were never “clicked on.”

In section 3.3 Blockbusters and the share button, the authors write:


First, 59% of the shared URLs are never clicked or, as we call them, silent. Note that we merged URLs pointing to the same article, so out of 10 articles mentioned on Twitter, 6 typically on niche topics are never clicked.

Because silent URLs are so common, they actually account for a significant fraction (15%) of the whole shares we collected, more than one out of seven. An interesting paradox is that there seems to be vastly more niche content that users are willing to mention in Twitter than the content that they are actually willing to click on.
… (emphasis in the original)

To re-write that with the definition of shared inserted:

“…59% of the URLs published in a tweet or re-tweet are never clicked…”

That includes:

  1. Tweet with a URL and no one clicks on the shortened URL in bit.ly
  2. Re-tweet with a URL and no one clicks on the shortened URL in bit.ly

Since tweets and re-tweets are lumped together (they may not be in the data; I haven’t seen it yet), it isn’t possible to say how many re-tweets occurred without corresponding clicks on the shortened URLs.
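
A toy calculation (with made-up numbers) shows why the lumping matters. Under the paper’s definition, every record below is a “share,” so the silent percentage says nothing about retweets in particular:

    shares = [
        {"kind": "tweet",   "clicks": 0},  # original tweet, URL never clicked
        {"kind": "tweet",   "clicks": 3},
        {"kind": "retweet", "clicks": 0},
        {"kind": "retweet", "clicks": 1},
        {"kind": "tweet",   "clicks": 0},
    ]
    silent = [s for s in shares if s["clicks"] == 0]
    print(f"{len(silent) / len(shares):.0%} of shares were never clicked")
    # The aggregate hides how many silent shares were retweets, which is
    # the number you would need to support "retweeted without reading."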

I’m certain people share tweets without visiting URLs, but this article isn’t an authority for percentages on that claim.

Not only should you visit URLs but you should also read carefully what you find, before re-tweeting or reporting.

The No-Value-Add Of Academic Publishers And Peer Review

June 21st, 2016

Comparing Published Scientific Journal Articles to Their Pre-print Versions by Martin Klein, Peter Broadwell, Sharon E. Farb, Todd Grappone.

Abstract:

Academic publishers claim that they add value to scholarly communications by coordinating reviews and contributing and enhancing text during publication. These contributions come at a considerable cost: U.S. academic libraries paid $1.7 billion for serial subscriptions in 2008 alone. Library budgets, in contrast, are flat and not able to keep pace with serial price inflation. We have investigated the publishers’ value proposition by conducting a comparative study of pre-print papers and their final published counterparts. This comparison had two working assumptions: 1) if the publishers’ argument is valid, the text of a pre-print paper should vary measurably from its corresponding final published version, and 2) by applying standard similarity measures, we should be able to detect and quantify such differences. Our analysis revealed that the text contents of the scientific papers generally changed very little from their pre-print to final published versions. These findings contribute empirical indicators to discussions of the added value of commercial publishers and therefore should influence libraries’ economic decisions regarding access to scholarly publications.

The authors have performed a very detailed analysis of pre-prints, 90% – 95% of which are published as open pre-prints first, to conclude there is no appreciable difference between the pre-prints and the final published versions.
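
If you want to try the comparison yourself, cosine similarity over TF-IDF vectors is one standard measure of the kind the study applies (a minimal sketch; the file names are hypothetical):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def version_similarity(preprint_text, published_text):
        # Scores near 1.0 mean the published text barely differs
        # from the pre-print.
        tfidf = TfidfVectorizer().fit_transform([preprint_text, published_text])
        return cosine_similarity(tfidf[0], tfidf[1])[0, 0]

    print(version_similarity(open("preprint.txt").read(),
                             open("published.txt").read()))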

I take “…no appreciable difference…” to mean academic publishers and the peer review process, despite claims to the contrary, contribute little or no value to academic publications.

How’s that for a bargaining chip in negotiating subscription prices?

Tapping Into The Terror Money Stream

June 21st, 2016

Can ISIS Take Down D.C.? by Jeff Stein.

From the post:


If the federal government is good at anything, however, it’s throwing money at threats. Since 2003, taxpayers have contributed $1.3 billion to the feds’ BioWatch program, a network of pathogen detectors deployed in D.C. and 33 other cities (plus at so-called national security events like the Super Bowl), despite persistent questions about its need and reliability. In 2013, Republican Representative Tim Murphy of Pennsylvania, chairman of the House Energy and Commerce Committee’s Oversight and Investigations subcommittee, called it a “boondoggle.” Jeh Johnson, who took over the reins of the Department of Homeland Security (DHS) in late 2013, evidently agreed. One of his first acts was to cancel a planned third generation of the program, but the rest of it is still running.

“The BioWatch program was a mistake from the start,” a former top federal emergency medicine official tells Newsweek on condition of anonymity, saying he fears retaliation from the government for speaking out. The well-known problems with the detectors, he says, are both highly technical and practical. “Any sort of thing can blow into its filter papers, and then you are wrapping yourself around an axle,” trying to figure out if it’s real. Of the 149 suspected pathogen samples collected by BioWatch detectors nationwide, he reports, “none were a threat to public health.” A 2003 tularemia alarm in Texas was traced to a dead rabbit.

Michael Sheehan, a former top Pentagon, State Department and New York Police Department counterterrorism official, echoes such assessments. “The technology didn’t work, and I had no confidence that it ever would,” he tells Newsweek. The immense amounts of time and money devoted to it, he adds, could’ve been better spent “protecting dangerous pathogens stored in city hospitals from falling into the wrong hands.” When he sought to explore that angle at the NYPD, the Centers for Disease Control and Prevention “initially would not tell us where they were until I sent two detectives to Atlanta to find out,” he says. “And they did, and we helped the hospitals with their security—and they were happy for the assistance.”

Even if BioWatch performed as touted, Sheehan and others say, a virus would be virtually out of control and sending scores of people to emergency rooms by the time air samples were gathered, analyzed and the horrific results distributed to first responders. BioWatch, Sheehan suggests, is a billion-dollar hammer looking for a nail, since “weaponizing biological agents is incredibly hard to do,” and even ISIS, which theoretically has the scientific assets to pursue such weapons, has shown little sustained interest in them. Plus, extremists of all denominations have demonstrated over the decades that they like things that go boom (or tat-tat-tat, the sound of an assault rifle). So the $1.1 billion spent on BioWatch is way out of proportion to the risk, critics argue. What’s really driving programs like BioWatch, Sheehan says—beside fears of leaving any potential threat uncovered, no matter how small—is the opportunity it gives members of Congress to lard out pork to research universities and contractors back home.

Considering that two people, one rifle, terrorized the D.C. area for 23 days, The Beltway Snipers, Part 1, The Beltway Snipers, Part 2, I would have to say yes, ISIS can take down D.C.

Even if they limit themselves to “…things that go boom (or tat-tat-tat, the sound of an assault rifle).” (You have to wonder about the quality of their “terrorist” training.)

But in order to get funding, you have to discover a scenario that isn’t fully occupied by contractors.

Quite recently I read of an effort to detect the possible onset of terror attacks based on social media traffic. Except there is no evidence that random social media group traffic picks up before a terrorist attack. Yeah, well, there is that, but that won’t come up for years.

Here’s a new terror vector. Using Washington, D.C. as an example, how would you weaponize open data found at: District of Columbia Open Data?

Data.gov reports there are forty states (US), forty-eight counties and cities (US), fifty-two international countries (what else would they be?), and one-hundred and sixty-four international regions with open data portals.

That’s a considerable amount of open data. Data that could be combined to further ends not intended to improve public health and well-being.

Don’t allow the techno-jingoism of posts like How big data can terrorize global terrorism to lull you into a false sense of security.

Anyone who can think beyond being a not-so-smart bomb or tat-tat-tat can access and use open data with free tools. Are you aware of the danger that poses?

Driving While Black (DWB) Stops Affirmed By Supreme Court [Hacker Tip]

June 21st, 2016

Justice Sotomayor captures the essence of Utah v. Strieff when she writes:

The Court today holds that the discovery of a warrant for an unpaid parking ticket will forgive a police officer’s violation of your Fourth Amendment rights. Do not be soothed by the opinion’s technical language: This case allows the police to stop you on the street, demand your identification, and check it for outstanding traffic warrants—even if you are doing nothing wrong. If the officer discovers a warrant for a fine you forgot to pay, courts will now excuse his illegal stop and will admit into evidence anything he happens to find by searching you after arresting you on the warrant. Because the Fourth Amendment should prohibit, not permit, such misconduct, I dissent.

The facts are easy enough to summarize. Edward Strieff was seen visiting a home that had been reported (but not confirmed) as a site of drug sales. Officer Fackrell, with no suspicion that Strieff had committed a crime, detained Strieff, requested his identification and was advised of a traffic warrant for his arrest. Fackrell arrested Strieff and, while searching him, discovered “a baggie of methamphetamine and drug paraphernalia.”

Strieff moved to suppress the “baggie of methamphetamine and drug paraphernalia,” since Officer Fackrell lacked even a pretense for the original stop. The Utah Supreme Court correctly agreed, but the Supreme Court, in this decision written by “Justice” Thomas, disagreed.

The “exclusionary rule” has a long history but for our purposes, it suffices to say that it removes any incentive for police officers to stop people without reasonable suspicion and demand their ID, search them, etc.

It does so by excluding any evidence of a crime they discover as a result of such a stop. Or at least it did prior to Utah v. Strieff. Police officers were forced to make up some pretext for a reasonable suspicion in order to stop any given individual.

No reasonable suspicion for stop = No evidence to be used in court.

That was the theory, prior to Utah v. Strieff.

Sotomayor makes clear in her dissent, this was a suspicionless stop:


This case involves a suspicionless stop, one in which the officer initiated this chain of events without justification. As the Justice Department notes, supra, at 8, many innocent people are subjected to the humiliations of these unconstitutional searches. The white defendant in this case shows that anyone’s dignity can be violated in this manner. See M. Gottschalk, Caught 119–138 (2015). But it is no secret that people of color are disproportionate victims of this type of scrutiny. See M. Alexander, The New Jim Crow 95–136 (2010). For generations, black and brown parents have given their children “the talk”— instructing them never to run down the street; always keep your hands where they can be seen; do not even think of talking back to a stranger—all out of fear of how an officer with a gun will react to them. See, e.g., W. E. B. Du Bois, The Souls of Black Folk (1903); J. Baldwin, The Fire Next Time (1963); T. Coates, Between the World and Me (2015).

By legitimizing the conduct that produces this double consciousness, this case tells everyone, white and black, guilty and innocent, that an officer can verify your legal status at any time. It says that your body is subject to invasion while courts excuse the violation of your rights. It implies that you are not a citizen of a democracy but the subject of a carceral state, just waiting to be cataloged.

We must not pretend that the countless people who are routinely targeted by police are “isolated.” They are the canaries in the coal mine whose deaths, civil and literal, warn us that no one can breathe in this atmosphere. See L. Guinier & G. Torres, The Miner’s Canary 274–283 (2002). They are the ones who recognize that unlawful police stops corrode all our civil liberties and threaten all our lives. Until their voices matter too, our justice system will continue to be anything but. (emphasis in original)

New rule: Police can stop you at any time, for no reason, demand identification, check your legal status, if you are arrested as a result of that check, any evidence seized can be used against you in court.

Police officers were very good at imagining reasonable cause for stopping people, but now even that tissue of protection has been torn away.

You are subject to arbitrary and capricious stops with no disincentive for the police. They can go fishing for evidence and see what turns up.

For all of that, I don’t see the police as our enemy. They are playing by rules as defined by others. If we want better play, such as Fourth Amendment rights, then we need enforcement of those rights.

It isn’t hard to identify the enemies of the people in this decision.


Hackers, you too can be stopped at any time. Hackers should never carry incriminating USB drives, SIM cards, etc. If possible, everything even remotely questionable should not be in a location physically associated with you.

Remote storage of your code, booty, etc., protects it from clumsy physical seizure of local hardware and, if you are very brave, enables rapid recovery from such seizures.

Cryptome – Happy 20th Anniversary!

June 20th, 2016


Cryptome marks 20 years, June 1996-2016, 100K dox thanx to 25K mostly anonymous doxers.



Donate $100 for the Cryptome Archive of 101,900 files from June 1996 to 25 May 2016 on 1 USB  (43.5GB). Cryptome public key.
(Search site with Google, or WikiLeaks for most not all.)

Bitcoin: 1P11b3Xkgagzex3fYusVcJ3ZTVsNwwnrBZ

Additional items on https://twitter.com/Cryptomeorg


Interesting post on fake Cryptome torrents: http://www.joshwieder.net/2015/07/cryptome-torrents-draw-concerns.html

$100 is a real bargain for the Cryptome Archive, plus you will be helping a worthy cause.

Repost the news of Cryptome’s 20th anniversary far and wide!

Thanks!

Clojure Gazette – New Format – Looking for New Readers

June 20th, 2016

Clojure Gazette by Eric Normand.

From the end of this essay:

Hi! The Clojure Gazette has recently changed from a list of curated links to an essay-style newsletter. I’ve gotten nothing but good comments about the change, but I’ve also noticed the first negative growth of readership since I started. I know these essays aren’t for everyone, but I’m sure there are people out there who would like the new format who don’t know about it. Would you do me a favor? Please share the Gazette with your friends!

The Biggest Waste in Our Industry is the title of the essay I link to above.

From the post:

I would like to talk about two nasty habits I have been party to working in software. Those two habits are 1) protecting programmer time and 2) measuring programmer productivity. I’m talking from my experience as a programmer to all the managers out there, or any programmer interested in process.

You can think of Eric’s essay as an update to Peopleware: Productive Projects and Teams by Tom DeMarco and Timothy Lister.

Peopleware was first published in 1987, second edition in 1999 (8 new chapters), third edition in 2013 (5 more pages than 1999 edition?).

Twenty-nine (29) years after the publication of Peopleware, managers still don’t “get” how to manage programmers (or other creative workers).

Disappointing, but not surprising.

It’s not uncommon to read position ads that describe going to lunch en masse, group activities, etc.

You would think they were hiring lemmings rather than technical staff.

If your startup founder is that lonely, check the local mission. Hire people for social activities, lunch, etc. Cheaper than hiring salaried staff. Greater variety as well. Ditto for managers with the need to “manage” someone.

Tufte-inspired LaTeX (handouts, papers, and books)

June 20th, 2016

Tufte-LaTeX – A Tufte-inspired LaTeX class for producing handouts, papers, and books.

From the webpage:

As discussed in the Book Design thread of Edward Tufte’s Ask E.T. Forum, this site is home to LaTeX classes for producing handouts and books according to the style of Edward R. Tufte and Richard Feynman.

Download the latest release, browse the source, join the mailing list, and/or submit patches. Contributors are welcome to help polish these classes!

Some examples of the Tufte-LaTeX classes in action:

  • Some papers by Jason Catena using the handout class
  • A handout for a math club lecture on volumes of n-dimensional spheres by Marty Weissman
  • A draft copy of a book written by Marty Weissman using the new Tufte-book class
  • An example handout (source) using XeLaTeX with the bidi class option for the ancient Hebrew by Kirk Lowery
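
To give you a feel for the classes, here is a minimal tufte-handout document (a sketch only; it assumes the Tufte-LaTeX classes are installed):

    \documentclass{tufte-handout}

    \title{A Minimal Tufte-Style Handout}
    \author{Your Name}

    \begin{document}
    \maketitle

    \section{Margin notes}
    The class sets notes in the wide margin Tufte's designs
    are known for.\sidenote{Like this sidenote.}
    Figures can also go in the margin, using the
    \texttt{marginfigure} environment.

    \end{document}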

Caution: A Tufte-inspired LaTeX class is no substitute for professional design advice and assistance. It will help you do “better,” for some definition of “better,” but professional design is in a class of its own.

If you are interested in TeX/LaTeX tips, follow: TexTips. One of several excellent Twitter feeds by John D. Cook.

Machine Learning Yearning [New Book – Free Draft – Signup By Friday, June 24th (2016)]

June 20th, 2016

Machine Learning Yearning by Andrew Ng.

About Andrew Ng:

Andrew Ng is Associate Professor of Computer Science at Stanford; Chief Scientist of Baidu; and Chairman and Co-founder of Coursera.

In 2011 he led the development of Stanford University’s main MOOC (Massive Open Online Courses) platform and also taught an online Machine Learning class to over 100,000 students, leading to the founding of Coursera. Ng’s goal is to give everyone in the world access to a great education, for free. Today, Coursera partners with some of the top universities in the world to offer high quality online courses, and is the largest MOOC platform in the world.

Ng also works on machine learning with an emphasis on deep learning. He founded and led the “Google Brain” project which developed massive-scale deep learning algorithms. This resulted in the famous “Google cat” result, in which a massive neural network with 1 billion parameters learned from unlabeled YouTube videos to detect cats. More recently, he continues to work on deep learning and its applications to computer vision and speech, including such applications as autonomous driving.

Haven’t you signed up yet?

OK, What You Will Learn:

The goal of this book is to teach you how to make the numerous decisions needed with organizing a machine learning project. You will learn:

  • How to establish your dev and test sets
  • Basic error analysis
  • How you can use Bias and Variance to decide what to do
  • Learning curves
  • Comparing learning algorithms to human-level performance
  • Debugging inference algorithms
  • When you should and should not use end-to-end deep learning
  • Error analysis by parts
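
As a taste of the “learning curves” and bias/variance items above, here is a minimal sketch in Python (scikit-learn stands in for whatever tooling the book assumes):

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve

    X, y = load_digits(return_X_y=True)
    sizes, train_scores, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

    for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
        # A large, persistent gap suggests variance (overfitting); two low,
        # converging scores suggest bias (underfitting).
        print(f"{n:5d} examples: train={tr:.2f} validation={va:.2f}")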

Free drafts of a new book on machine learning projects, not just machine learning, by one of the leading world experts on machine learning.

Now are you signed up?

If you are interested in machine learning, following Andrew Ng on Twitter isn’t a bad place to start.

Be aware, however, that even machine learning experts can be mistaken. For example, Andrew tweeted, favorably, How to make a good teacher from the Economist.


Instilling these techniques is easier said than done. With teaching as with other complex skills, the route to mastery is not abstruse theory but intense, guided practice grounded in subject-matter knowledge and pedagogical methods. Trainees should spend more time in the classroom. The places where pupils do best, for example Finland, Singapore and Shanghai, put novice teachers through a demanding apprenticeship. In America high-performing charter schools teach trainees in the classroom and bring them on with coaching and feedback.

Teacher-training institutions need to be more rigorous—rather as a century ago medical schools raised the calibre of doctors by introducing systematic curriculums and providing clinical experience. It is essential that teacher-training colleges start to collect and publish data on how their graduates perform in the classroom. Courses that produce teachers who go on to do little or nothing to improve their pupils’ learning should not receive subsidies or see their graduates become teachers. They would then have to improve to survive.

The author conflates “demanding apprenticeship” with “teacher-training colleges start to collect and publish data on how their graduates perform in the classroom,” as though whatever data we collect has some meaningful relationship with teaching and/or the training of teachers.

A “demanding apprenticeship” no doubt weeds out people who are not well suited to be teachers, there is no evidence that it can make a teacher out of someone who isn’t suited for the task.

The collection of data is one of the ongoing fallacies about American education. Simply because you can collect data is no indication that it is useful and/or has any relationship to what you are attempting to measure.

Follow Andrew for his work on machine learning, not so much for his opinions on education.

Concealing the Purchase of Government Officials

June 20th, 2016

Fredreka Schouten reports, in House approves Koch-backed bill to shield donors’ names, that the US House of Representatives has passed a measure to conceal the purchase of government officials.

From the post:

The House approved a bill Tuesday that would bar the IRS from collecting the names of donors to tax-exempt groups, prompting warnings from campaign-finance watchdogs that it could lead to foreign interests illegally infiltrating American elections.

The measure, which has the support of House Speaker Paul Ryan, R-Wis., also pits the Obama administration against one of the most powerful figures in Republican politics, billionaire industrialist Charles Koch. Koch’s donor network channels hundreds of millions of dollars each year into groups that largely use anonymous donations to shape policies on everything from health care to tax subsidies. Its leaders have urged the Republican-controlled Congress to clamp down on the IRS, citing free-speech concerns.

The names of donors to politically active non-profit groups aren’t public information now, but the organizations still have to disclose donor information to the IRS on annual tax returns. The bill, written by Rep. Peter Roskam, R-Ill., would prohibit the tax agency from collecting names, addresses or any “identifying information” about donors.

Truth be told, however, “the House” didn’t vote in favor of H.R.5053 – Preventing IRS Abuse and Protecting Free Speech Act.

Rather, two-hundred and forty (240) identified representatives voted in favor of H.R.5053.

Two-hundred and forty representatives purchased by campaign contributions who now wish to keep their contributors secret.

Two-hundred and forty representatives who are, as likely as not, guilty of criminal, financial, sexual or other forms of misconduct that could result in their replacement.

Two-hundred and forty representatives who continue in office only so long as they are not exposed to law enforcement and the public.

Where are you going to invest your time and resources?

Showing solidarity on issues where substantive change isn’t going to happen, or taking back your government from its current purchasers?

PS: In case you think “substantive change” is possible on gun control, consider the unlikely scenario that “assault weapons” are banned from sale. So what? The ones in circulation number in the millions. Net effect of your “victory” would be exactly zero.

How do you skim through a digital book?

June 19th, 2016

How do you skim through a digital book? by Chloe Roberts.

From the post:

We’ve had a couple of digitised books that proved really popular with online audiences. Perhaps partly reflecting the interests of the global population, they’ve been about prostitutes and demons.

I’ve been especially interested in how people have interacted with these popular digitised books. Imagine how you’d pick up a book to look at in a library or bookshop. Would you start from page one, laboriously working through page by page, or would you flip through it, checking for interesting bits? Should we expect any different behaviour when people use a digital book?

We collect data on aggregate (nothing personal or trackable to our users) about what’s being asked of our digitised items in the viewer. With such a large number of views of these two popular books, I’ve got a big enough dataset to get an interesting idea of how readers might be using our digitised books.

Focusing on ‘Compendium rarissimum totius Artis Magicae sistematisatae per celeberrimos Artis hujus Magistros. Anno 1057. Noli me tangere’ (the 18th century one about demons) I’ve mapped the number of page views (horizontal axis) against page number (vertical axis, with front cover at the top), and added coloured bands to represent what’s on those pages.

Chloe captured and then analyzed the behavior of readers of two very popular digitised books.

She explains her second observation:

Observation 2: People like looking at pictures more than text

by suggesting that the text, being in Latin and German, may explain readers’ fondness for the pictures.

Perhaps, but I have heard the same observation made about Playboy magazine. ;-)
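Chloe’s page-view mapping is also easy to reproduce for your own materials. A minimal sketch, assuming you have aggregate per-page view counts in a CSV file (the file name and column names here are hypothetical) and matplotlib installed:

```python
# Plot per-page view counts in roughly the style Chloe describes:
# views on the horizontal axis, page number on the vertical axis,
# with the front cover at the top.
import csv

import matplotlib.pyplot as plt

pages, views = [], []
with open("page_views.csv") as f:      # hypothetical aggregate data
    for row in csv.DictReader(f):
        pages.append(int(row["page"]))
        views.append(int(row["views"]))

plt.barh(pages, views)
plt.gca().invert_yaxis()               # front cover at the top
plt.xlabel("Page views")
plt.ylabel("Page number")
plt.title("Where readers actually look")
plt.show()
```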

From a documentation/training perspective, Chloe’s technique, applied to digital training materials, could provide guidance on:

  • Length of materials
  • Use of illustrations
  • Organization of materials
  • Material that habitually goes unread

If critical material isn’t being read, exhorting newcomers to read more carefully is not the answer.

If security and/or on-boarding reading isn’t happening, as shown by reader behavior, that’s your fault, not the readers’.

Your call: successful staff and customers, or failing staff and customers you can blame for security faults and declining sales.

Choose carefully.

Electronic Literature Organization

June 19th, 2016

Electronic Literature Organization

From the “What is E-Lit” page:

Electronic literature, or e-lit, refers to works with important literary aspects that take advantage of the capabilities and contexts provided by the stand-alone or networked computer. Within the broad category of electronic literature are several forms and threads of practice, some of which are:

  • Hypertext fiction and poetry, on and off the Web
  • Kinetic poetry presented in Flash and using other platforms
  • Computer art installations which ask viewers to read them or otherwise have literary aspects
  • Conversational characters, also known as chatterbots
  • Interactive fiction
  • Literary apps
  • Novels that take the form of emails, SMS messages, or blogs
  • Poems and stories that are generated by computers, either interactively or based on parameters given at the beginning
  • Collaborative writing projects that allow readers to contribute to the text of a work
  • Literary performances online that develop new ways of writing

The ELO showcase, created in 2006 and with some entries from 2010, provides a selection of outstanding examples of electronic literature, as do the two volumes of our Electronic Literature Collection.

The field of electronic literature is an evolving one. Literature today not only migrates from print to electronic media; increasingly, “born digital” works are created explicitly for the networked computer. The ELO seeks to bring the literary workings of this network and the process-intensive aspects of literature into visibility.

The confrontation with technology at the level of creation is what distinguishes electronic literature from, for example, e-books, digitized versions of print works, and other products of print authors “going digital.”

Electronic literature often intersects with conceptual and sound arts, but reading and writing remain central to the literary arts. These activities, unbound by pages and the printed book, now move freely through galleries, performance spaces, and museums. Electronic literature does not reside in any single medium or institution.

I was looking for a recent presentation by Allison Parrish on bots when I encountered the Electronic Literature Organization (ELO).

I was attracted by the bot discussion at a recent conference but, as you can see, the ELO’s range of activities is much broader.
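If “poems and stories that are generated by computers” sounds abstract, a toy generator takes only a few lines of Python. A minimal sketch, with hypothetical, hand-made word lists:

```python
import random

# A toy generative poem: assemble lines from small vocabularies.
# The word lists are placeholders; swap in words from your own corpus.
ADJECTIVES = ["silent", "electric", "borrowed", "arcane"]
NOUNS = ["network", "margin", "reader", "archive"]
VERBS = ["flickers", "unfolds", "remembers", "dissolves"]

def line():
    return (f"the {random.choice(ADJECTIVES)} "
            f"{random.choice(NOUNS)} {random.choice(VERBS)}")

for _ in range(4):   # a four-line "poem"
    print(line())
```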

Enjoy!

“invisible entities having arcane but gravely important significances”

June 19th, 2016

Allison Parrish tweeted:

https://t.co/sXt6AqEIoZ the “Other, Format” unicode category, full of invisible entities having arcane but gravely important significances

I just could not let a tweet with:

“invisible entities having arcane but gravely important significances”

pass without comment!

As of today, there are one-hundred and fifty (150) such entities, all with multiple properties.
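You can enumerate them for yourself. A minimal sketch in Python; note that the count you get depends on the Unicode version your Python ships with:

```python
import sys
import unicodedata

# Collect every code point in the "Other, Format" (Cf) category.
cf = [cp for cp in range(sys.maxunicode + 1)
      if unicodedata.category(chr(cp)) == "Cf"]

print(len(cf))   # 150 at the time of writing; varies by Unicode version
for cp in cf[:10]:
    # Some Cf code points (e.g., U+00AD SOFT HYPHEN) have names;
    # fall back to a placeholder for any that do not.
    print(f"U+{cp:04X}", unicodedata.name(chr(cp), "<unnamed>"))
```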

How many of these “invisible entities” are familiar to you?