Archive for the ‘Interface Research/Design’ Category

Tired of Chasing Ephemera? Open Greek and Latin Design Sprint (bids in August, 2017)

Thursday, July 27th, 2017

Tired of reading/chasing the ephemera explosion in American politics?

I’ve got an opportunity for you to contribute to a project with texts preserved by hand for thousands of years!

Design Sprint for Perseus 5.0/Open Greek and Latin

From the webpage:

We announced in June that the Center for Hellenic Studies had signed a contract with Intrepid.io to conduct a design sprint that would support Perseus 5.0 and the Open Greek and Latin collection that it will include. Our goal was to provide a sample model for a new interface that would support searching and reading of Greek, Latin, and other historical languages. The report from that sprint was handed over to CHS yesterday and we, in turn, have made these materials available, including both the summary presentation and associated materials. The goal is to solicit comment and to provide potential applicants to the planned RFP with access to this work as soon as possible.

The sprint took just over two weeks and was an intensive effort. An evolving Google Doc with commentary on the Intrepid Wrap-up slides for the Center for Hellenic Studies should now be visible. Readers of the report will see that questions remain to be answered. How will we represent Perseus, Open Greek and Latin, Open Philology, and other efforts? One thing that we have added and that will not change is the name of the system with which this planned implementation phase will begin: whether it is Perseus, Open Philology or some other name, it will be powered by the Scaife Digital Library Viewer, a name that commemorates Ross Scaife, pioneer of Digital Classics and a friend whom many of us will always miss.

The Intrepid report also includes elements that we will wish to develop further — students of Greco-Roman culture may not find “relevance” a helpful way to sort search results. The Intrepid Sprint greatly advanced our own thinking and provided us with a new starting point. Anyone may build upon the work presented here — but they can also suggest alternate approaches.

The core deliverables form an impressive list:

At the moment we would summarize core deliverables as:

  1. A new reading environment that captures the basic functionality of the Perseus 4.0 reading environment but that is more customizable and that can be localized efficiently into multiple modern languages, with Arabic, Persian, German and English as the initial target languages. The overall Open Greek and Latin team is, of course, responsible for providing the non-English content. The Scaife DL Viewer should make it possible for us to localize into multiple languages as efficiently as possible.
  2. The reading environment should be designed to support any CTS-compliant collection and should be easily configured with a look and feel for different collections.
  3. The reading environment should contain a lightweight treebank viewer — we don’t need to support editing of treebanks in the reading environment. The functionality that the Alpheios Project provided for the first book of the Odyssey would be more than adequate. Treebanks are available under the label “diagram” when you double-click on a Greek word.
  4. The reading environment should support dynamic word/phrase level alignments between source text and translation(s). Here again, the functionality that the Alpheios Project provided for the first book of the Odyssey would be adequate. More recent work by Tariq Yousef implementing this functionality is visible at http://divan-hafez.com/ and http://ugarit.ialigner.com/.
  5. The system must be able to search for both specific inflected forms and for all forms of a particular word (as in Perseus 4.0) in CTS-compliant EpiDoc TEI XML. The search will build upon the linguistically analyzed texts available in https://github.com/gcelano/CTSAncientGreekXML. This will enable searching by dictionary entry, by part of speech, and by inflected form. For Greek, the base collection is visible at the First Thousand Years of Greek website (which now has begun to accumulate a substantial amount of later Greek). CTS-compliant EpiDoc Latin texts can be found at https://github.com/OpenGreekAndLatin/csel-dev/tree/master/data and https://github.com/PerseusDL/canonical-latinLit/tree/master/data.
  6. The system should ideally be able to search Greek and Latin that is available only as uncorrected OCR-generated text in hOCR format. Here the results may follow the image-front strategy familiar to academics from sources such as JSTOR. If it is not feasible to integrate this search within the three months of core work, then we need a plan for subsequent integration that Leipzig and OGL members can implement later.
  7. The new system must be scalable and updating from Lucene to Elasticsearch is desirable. While these collections may not be large by modern standards, they are substantial. Open Greek and Latin currently has c. 67 million words of Greek and Latin at various stages of post-processing and c. 90 million words of additional translations from Greek and Latin into English, French, German and Italian, while the Lace Greek OCR Project has OCR-generated text for 1100 volumes.
  8. The system must integrate translations and translation alignments into the searching system, so that users can search either in the original or in modern language translations where we provide this data. This goes back to work by David Bamman in the NEH-funded Dynamic Lexicon Project (when he was a researcher at Perseus at Tufts). For more recent examples of this, see http://divan-hafez.com/ and Ugarit. Note that one reason to adopt CTS URNs is to simplify the task of displaying translations of source texts — the system is only responsible for displaying translations insofar as they are available via the CTS API.
  9. The system must provide initial support for a user profile. One benefit of the profile is that users will be able to define their own reading lists — and the Scaife DL Viewer will then be able to provide personalized reading support, e.g., word X already showed up in your reading at places A, B, and C, while word Y, which is new to you, will appear 12 times in the rest of your planned readings (i.e., you should think about learning that word). By adopting the CTS data model, we can make very precise reading lists, defining precise selections from particular editions of particular works. We also want to be able to support an initial set of user contributions that are (1) easy to implement technically and (2) easy for users to understand and perform. Thus we would support fixing residual data entry errors, creating alignments between source texts and translations, improving automated part of speech tagging and lemmatization but users would go to external resources to perform more complex tasks such as syntactic markup (treebanking).
  10. We would welcome bids that bring to bear expertise in the EPUB format and that could help develop a model for representing CTS-compliant Greek and Latin sources in EPUB as a mechanism to make these materials available on smartphones. We can already convert our TEI XML into EPUB. The goal here is to exploit the easiest ways to optimize the experience. We can, for example, convert one or more of our Greek and Latin lexica into the EPUB Dictionary format and use our morphological analyses to generate links from particular forms in a text to the right dictionary entry or entries. Can we represent syntactically analyzed sentences with SVG? Can we include dynamic translation alignments?
  11. Bids should consider including a design component. We were very pleased with the Design Sprint that took place in July 2017 and would like to include a follow-up Design Sprint in early 2018 that will consider (1) next steps for Greek and Latin and (2) generalizing our work to other historical languages. This Design Sprint might well go to a separate contractor (thus providing us also with a separate point of view on the work done so far).
  12. Work must build upon the Canonical Text Services Protocol. Bids should be prepared to build upon https://github.com/Capitains, but should also be able to build upon other CTS servers (e.g., https://github.com/ThomasK81/LightWeightCTSServer and cts.informatik.uni-leipzig.de).
  13. All source code must be available on Github under an appropriate open license so that third parties can freely reuse and build upon it.
  14. Source code must be designed and documented to facilitate actual (not just legally possible) reuse.
  15. The contractor will have the flexibility to get the job done but will be expected to work as closely as possible with, and to draw wherever possible upon the on-going work done by, the collaborators who are contributing to Open Greek and Latin. The contractor must have the right to decide how much collaboration makes sense.
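To make deliverable 9 concrete, here is a minimal sketch of the personalized reading support it describes. Everything in it is hypothetical: the URNs, the toy lemma lists, and the function name are mine, standing in for data a real CTS API and lemmatizer would supply.

```python
from collections import Counter

# Hypothetical lemmatized passages keyed by CTS URN. In the real system these
# would come from a CTS server and the linguistically analyzed texts.
PASSAGES = {
    "urn:cts:greekLit:tlg0012.tlg002:1.1-1.10": ["andra", "moi", "ennepe", "mousa"],
    "urn:cts:greekLit:tlg0012.tlg002:1.11-1.20": ["mousa", "polutropon", "andra"],
}

def reading_support(read_urns, planned_urns):
    """For each lemma in the planned reading, report where it already appeared
    and how often it will recur (the 'word X showed up at A, B, C' feature)."""
    seen = {}
    for urn in read_urns:
        for lemma in PASSAGES.get(urn, []):
            seen.setdefault(lemma, []).append(urn)
    upcoming = Counter()
    for urn in planned_urns:
        upcoming.update(PASSAGES.get(urn, []))
    return {
        lemma: {"already_seen_at": seen.get(lemma, []), "upcoming": count}
        for lemma, count in upcoming.items()
    }
```

Because the reading list is defined by CTS URNs, the "precise selections from particular editions of particular works" mentioned above come along for free.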

You can use your data science skills to sell soap, cars, ED treatments, or even apocalyptically narcissistic politicians, or, you can advance Perseus 5.0.

Your call.

Good visualizations optimize for the human visual system

Saturday, December 31st, 2016

How Humans See Data by John Rauser.

Apologies to John for stepping on his title but at time mark 3:26, he says:

Good visualizations optimize for the human visual system.

That one insight sets a basis for distinguishing between good visualizations and bad ones.

Do watch the rest of the video, it is all as good as that moment.

What’s your favorite moment?

From the description:

John Rauser explains a few of the most important results from research into the functioning of the human visual system and the question of how humans decode information presented in graphical form. By understanding and applying this research when designing statistical graphics, you can simplify difficult analytical tasks as much as possible.

Links:

R/GGplot2 code for all plots in presentation.

Slides for Good visualizations optimize for the human visual system

Graphical Perception and Graphical Methods for Analyzing Scientific Data by William S. Cleveland and Robert McGill. (cited in the presentation)

The Elements of Graphing Data by William S. Cleveland. (also cited in the presentation)
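The Cleveland and McGill work listed above ranks elementary perceptual tasks by how accurately humans decode them. A toy sketch of using that ranking to choose an encoding; the ordering below is my simplified paraphrase of their results, not a quotation:

```python
# Rough ordering of perceptual tasks, most accurately decoded first
# (simplified from Cleveland & McGill's graphical perception research).
ACCURACY_RANKING = [
    "position_common_scale",
    "position_nonaligned_scale",
    "length",
    "angle",
    "area",
    "volume",
    "color_saturation",
]

def best_encoding(candidates):
    """Pick the candidate encoding humans decode most accurately."""
    return min(candidates, key=ACCURACY_RANKING.index)
```

This is why a dot plot (position on a common scale) usually beats a pie chart (angle) for the same data.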

Colorblind-Friendly Graphics

Tuesday, July 19th, 2016

Three tools to help you make colorblind-friendly graphics by Alex Duner.

From the post:

I am one of the 8% of men of Northern European descent who suffers from red-green colorblindness. Specifically, I have a mild case of protanopia (also called protanomaly), which means that my eyes lack a sufficient number of retinal cones to accurately see red wavelengths. To me some purples appear closer to blue; some oranges and light greens appear closer to yellow; dark greens and brown are sometimes indistinguishable.

Most of the time this has little impact on my day-to-day life, but as a news consumer and designer I often find myself struggling to read certain visualizations because my eyes just can’t distinguish the color scheme. (If you’re not colorblind and are interested in experiencing it, check out Dan Kaminsky’s iPhone app DanKam which uses augmented reality to let you experience the world through different color visions.)

As information architects, data visualizers and web designers, we need to make our work accessible to as many people as possible, which includes people with colorblindness.

Alex is writing from a journalism perspective but accessibility is a concern for any information delivery system.

A pair of rather remarkable tools will be useful in vetting your graphics: Vischeck simulates colorblindness on your images, and Daltonize “corrects” images for colorblind users. Both are available at: http://www.vischeck.com/. Plugins are available for Photoshop (Win/Mac/ImageJ).

Loren Petrich has a collection of resources, including filters for GIMP to simulate colorblindness at: Color-Blindness Simulators.
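For the curious, simulation of the Vischeck sort is often approximated with a linear transform on RGB values. A rough sketch using a protanopia matrix commonly circulated in simulation tools; this is illustrative, not Vischeck's actual algorithm:

```python
# Approximate RGB matrix for protanopia, as widely circulated in
# colorblindness-simulation tools (an approximation, not a vision model).
PROTANOPIA = (
    (0.567, 0.433, 0.0),
    (0.558, 0.442, 0.0),
    (0.0, 0.242, 0.758),
)

def simulate_protanopia(rgb):
    """Apply the matrix to one (R, G, B) pixel with 0-255 channels."""
    return tuple(
        min(255, round(sum(m * c for m, c in zip(row, rgb))))
        for row in PROTANOPIA
    )
```

Run your palette through something like this: if two colors come out nearly identical, a protanope will struggle with your graphic.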

HyperTerm (Not Windows HyperTerm)

Monday, July 18th, 2016

HyperTerm

Described tersely by Nat Torkington as:

— an open source in-browser terminal emulator.

That’s fair, but the project goals read:

The goal of the project is to create a beautiful and extensible experience for command-line interface users, built on open web standards.

In the beginning, our focus will be primarily around speed, stability and the development of the correct API for extension authors.

In the future, we anticipate the community will come up with innovative additions to enhance what could be the simplest, most powerful and well-tested interface for productivity.

JS/HTML/CSS Terminal. Visit HyperTerm for a rocking demo!

Scroll down after the demo to see more.

Looking forward to a Linux package being released!

Doom as a tool for system administration (1999) – Pen Testing?

Saturday, April 23rd, 2016

Doom as a tool for system administration by Dennis Chao.

From the webpage:

As I was listening to Anil talk about daemons spawning processes and sysadmins killing them, I thought, “What a great user interface!” Imagine running around with a shotgun blowing away your daemons and processes, never needing to type kill -9 again.

In Doom: The Aftermath you will find some later references, the most recent being from 2004.

You will have better luck at the ACM Digital Library entry for Doom as an interface for process management, which lists 29 subsequent papers citing Chao’s work on Doom. The latest is from 2015.

If system administration with a Doom interface sounds cool, imagine a Doom hacking interface.
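Strip away the 3D engine and the core of Chao's idea is just mapping game actions onto process management. A POSIX-only sketch, with the "shotgun" standing in for kill -9 (the helper names are mine):

```python
import os
import signal
import subprocess
import sys

def spawn_daemon():
    """Stand-in 'daemon': a child process that would otherwise sleep an hour."""
    return subprocess.Popen([sys.executable, "-c", "import time; time.sleep(3600)"])

def shotgun(proc):
    """The shotgun blast: SIGKILL (kill -9), no typing required."""
    os.kill(proc.pid, signal.SIGKILL)
    return proc.wait()  # returncode is the negative signal number on POSIX
```

In Chao's system each monster was bound to a real process like this; blasting the monster delivered the signal.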

I can drive a car but I don’t set the timing, adjust the fuel injection, program the exhaust controls to beat inspectors, etc.

A higher level of abstraction for tools carries a cost but advantages as well.

Imagine cadres of junior high/high school students competing in pen testing contests.

Learning a marketable skill and helping cash-strapped IT departments with security testing.

Isn’t that a win-win situation?

Courses -> Texts: A Hidden Relationship

Monday, March 28th, 2016

Quite by accident I discovered the relationship between courses and their texts is hidden in many (approx. 2000) campus bookstore interfaces.

If you visit a physical campus bookstore you can browse courses for their textbooks. Very useful if you are interested in the subject but not taking the course.

An online LLM (master’s in taxation) flyer prompted me to check the textbooks for the course work.

A simple enough information request. Find the campus bookstore and browse by course for text listings.

Not so fast!

The online presences of over 1200 campus bookstores are delivered by http://www.bkstr.com/, which offers this interface:

bookstore-campus

Another 748 campus bookstores are delivered by http://bncollege.com/, with a similar interface for textbooks:

harvard-yale

I started this post by saying the relationship between courses and their texts is hidden, but that’s not quite right.

The relationship between a meaningless course number and its required/suggested text is visible, but the identification of a course by a numeric string is hardly meaningful to the casual observer (read: not an enrolled student).

Perhaps better to say that a meaningful identification of courses for non-enrolled students and their relationship to required/suggested texts is absent.

That is the relationship of course -> text is present, but not in a form meaningful to anyone other than a student in that course.

Considering that two separate vendors, across almost 2,000 bookstores, deliberately obscure the course -> text relationship, one has to wonder why.

I don’t have any immediate suggestions but when I encounter systematic obscuring of information across vendors, alarm bells start to go off.

Just for completeness’ sake, you can get around the obscuring of the course -> text relationship by searching for syllabus LLM taxation income OR estate OR corporate or (school name) syllabus LLM taxation income OR estate OR corporate. Extract required/suggested texts from posted syllabi.
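If you find yourself running that workaround often, it is easy to script. A sketch of the query builder; the function name and defaults are mine:

```python
def syllabus_queries(school=None, program="LLM taxation",
                     topics=("income", "estate", "corporate")):
    """Build the workaround search queries: find posted syllabi and
    extract required/suggested texts from them by hand afterwards."""
    base = "syllabus " + program + " " + " OR ".join(topics)
    if school:
        return [school + " " + base]
    return [base]
```

Feed the resulting strings to the search engine of your choice.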

PS: If you can offer advice on bookstore interfaces, suggest enabling the browsing of courses by name and linking to the required/suggested texts.


During the searches I made writing this post, I encountered a syllabus on basic tax by Prof. Bret Wells which has this quote by Martin D. Ginsburg:

Basic tax, as everyone knows, is the only genuinely funny subject in law school.

Tax law does have an Alice in Wonderland quality about it, but The Hunting of the Snark: an Agony in Eight Fits is probably the closer match.

…but not if they have to do anything

Thursday, February 25th, 2016

Americans want to be safer online – but not if they have to do anything by Bill Camarda.

From the post:

In the wake of non-stop news about identity theft, malware, ransomware, and all manner of information security catastrophes, Americans have educated themselves and are fully leveraging today’s powerful technologies to keep themselves safe… not.

While 67% told Morar Consulting they “would like extra layers of privacy,” far fewer use the technological tools now available to them. That’s the top-line finding of a brand-new survey of 2,000 consumers by Morar on behalf of the worldwide VPN provider “Hide My Ass!”

A key related finding: 63% of survey respondents have encountered online security issues. But, among the folks who’ve been bitten, just 56% have permanently changed their online behavior afterwards. (If you don’t learn the “hard way,” when do you learn?)

According to Morar, there’s still an odd disconnect between the way some people protect themselves offline and what they’re willing to do on the web. 51% of respondents would publicly post their email addresses, 26% their home addresses, and 21% their personal phone numbers.

Does this result surprise you?

If not:

How should we judge projects/solutions that presume conscious effort by users to:

  • Encode data (think linked data and topic maps)
  • Create maps between data sets
  • Create data in formats not their own
  • Use data vocabularies not their own
  • Use software not their own
  • Improve search results
  • etc.

I mention “search results” as it is commonly admitted that search results are, at best, a pig’s breakfast. The amount of improvement possible over current search results is too large to even be guesstimated.

Rather than beat the dead horse of “…users ought to…” (yes, they should, but they don’t), it is better to ask “Now what?”

Why not try metrics?

Monitor user interactions with information and test systems to anticipate those needs. Both are measurable categories.

Consider that back in the day, indexes never indexed everything. Magazine indexes omitted ads, for example. Ads could have been indexed, but doing so didn’t offer enough return for the effort required.

Why not apply that model to modern information systems? Yes, we can create linked data or other representations for everything in every post, but if no one uses 90% of that encoding, we have spent a lot of money for very little gain.
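A sketch of the metric I have in mind: count actual uses of each encoded field and only index the ones that clear a threshold. The class name and threshold are illustrative, not recommendations:

```python
from collections import Counter

class IndexBudget:
    """Decide which fields earn full encoding/indexing, based on observed use."""

    def __init__(self, min_uses=10):
        self.min_uses = min_uses
        self.uses = Counter()

    def record(self, field):
        """Log one user interaction touching `field`."""
        self.uses[field] += 1

    def worth_indexing(self):
        """Fields whose measured use justifies the encoding investment."""
        return {f for f, n in self.uses.items() if n >= self.min_uses}
```

Everything that never clears the bar is the modern equivalent of the unindexed ads: consigned to plain search.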

Yes, that means we will be discriminating against less often cited authors, for example. And your point?

The preservation of the Greek literature discriminated against authors whose work wasn’t important enough for someone to invest in preserving it.

Of course, we may not lose data in quite the same way but if it can’t be found, isn’t that the same as being lost?

Let’s apply metrics to information retrieval and determine what return justifies the investment to make information easily available.

Consign/condemn the rest of it to search.

All Talk and No Buttons: The Conversational UI

Wednesday, February 24th, 2016

All Talk and No Buttons: The Conversational UI by Matty Mariansky.

From the post:

We’re witnessing an explosion of applications that no longer have a graphical user interface (GUI). They’ve actually been around for a while, but they’ve only recently started spreading into the mainstream. They are called bots, virtual assistants, invisible apps. They can run on Slack, WeChat, Facebook Messenger, plain SMS, or Amazon Echo. They can be entirely driven by artificial intelligence, or there can be a human behind the curtain.

Not to put too sharp a point on it but they used to be called sales associates or sales clerks, if you imagine a human being behind the curtain.

Since they are no longer visible in distinctive clothing, you have the task of creating a UI that isn’t quite as full-bandwidth as human-to-human proximity but is still useful.

A two part series that will have you thinking more seriously about what a conversational UI might look like.

Enjoy!

SnowCrew: Volunteer to Help Your Neighbors [Issue Tracking For Snow Shoveling]

Sunday, January 24th, 2016

SnowCrew: Volunteer to Help Your Neighbors

From the post:

Here’s how to see who needs help shoveling near you:
  1. Zoom into the map on the left (below on mobile) to where you live or want to help shovel
  2. When you locate someone nearby, click on the issue for more information
  3. Click on the link on this issue to be taken to the issue on SeeClickFix
  4. While on the issue in SeeClickFix, leave a comment to let the person who requested help and other volunteers know you are heading over to help.
  5. When you are done, go back to the issue and close it so the person who made the request and other volunteers know it is complete.
  6. Give yourself a Hi5 for being an awesome neighbor!

Disclaimer: By volunteering, you do so at your own risk.

A great illustration of a simple interface.

Compare and contrast with topic map interfaces where an errant select or keystroke opens up new, possibly duplicated options.

If our “working memory” can only hold up to 7 items, what is the result of inflicting more than seven options on users?

Pay attention the next time you use a complex application, like a word processor or spreadsheet. Some people do quite complex operations with them, but day to day, how many options do you use?

Certainly, a large number of options are available, when you need them, but how many do you use day to day?

I’ll tell you mine: open, close, save, search/replace, copy, paste, insert and I use what has been described as a “thermonuclear word processor.” 😉

It has more options than MS Word but I don’t have to use them unless needed.

That’s the trick, isn’t it? To expose users to the options they need, but only when needed and not before.
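That trick can be sketched in a few lines: rank commands by observed use and surface only a handful at the top level, with the full list still reachable on demand. The class and the seven-item default (the working-memory limit above) are mine:

```python
from collections import Counter

class AdaptiveMenu:
    """Progressive disclosure sketch: show the few most-used commands,
    keep everything else reachable when actually needed."""

    def __init__(self, commands, visible=7):
        self.commands = list(commands)
        self.visible = visible
        self.usage = Counter()

    def use(self, command):
        """Record one use of `command`."""
        self.usage[command] += 1

    def top_level(self):
        """Most-used commands first, never more than `visible` entries."""
        ranked = sorted(self.commands, key=lambda c: -self.usage[c])
        return ranked[: self.visible]

    def everything(self):
        """The full command list, for when you need the rare options."""
        return self.commands
```

My open/close/save/search-and-replace habits would float to the top; the thermonuclear options would stay out of sight until called for.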

A topic map interface that requires me to choose between Berkeley and Hume on causation (assuming I remember the arguments clearly), isn’t going to be popular or terribly useful.

O’Reilly Web Design Site

Wednesday, December 16th, 2015

O’Reilly Web Design Site

O’Reilly has launched a new website devoted to website design.

Organized by paths, what I have encountered so far is “free” for the price of registration.

I have long ignored web design much the same way others ignore the need for documentation. Perhaps there is more similarity there than I would care to admit.

It’s never too late to learn so I am going to start pursuing some of the paths at the O’Reilly Web Design site.

Suggestions or comments concerning your experience with this site welcome.

Enjoy!

We Know How You Feel [A Future Where Computers Remain Imbeciles]

Wednesday, December 16th, 2015

We Know How You Feel by Raffi Khatchadourian.

From the post:

Three years ago, archivists at A.T. & T. stumbled upon a rare fragment of computer history: a short film that Jim Henson produced for Ma Bell, in 1963. Henson had been hired to make the film for a conference that the company was convening to showcase its strengths in machine-to-machine communication. Told to devise a faux robot that believed it functioned better than a person, he came up with a cocky, boxy, jittery, bleeping Muppet on wheels. “This is computer H14,” it proclaims as the film begins. “Data program readout: number fourteen ninety-two per cent H2SOSO.” (Robots of that era always seemed obligated to initiate speech with senseless jargon.) “Begin subject: Man and the Machine,” it continues. “The machine possesses supreme intelligence, a faultless memory, and a beautiful soul.” A blast of exhaust from one of its ports vaporizes a passing bird. “Correction,” it says. “The machine does not have a soul. It has no bothersome emotions. While mere mortals wallow in a sea of emotionalism, the machine is busy digesting vast oceans of information in a single all-encompassing gulp.” H14 then takes such a gulp, which proves overwhelming. Ticking and whirring, it begs for a human mechanic; seconds later, it explodes.

The film, titled “Robot,” captures the aspirations that computer scientists held half a century ago (to build boxes of flawless logic), as well as the social anxieties that people felt about those aspirations (that such machines, by design or by accident, posed a threat). Henson’s film offered something else, too: a critique—echoed on television and in novels but dismissed by computer engineers—that, no matter a system’s capacity for errorless calculation, it will remain inflexible and fundamentally unintelligent until the people who design it consider emotions less bothersome. H14, like all computers in the real world, was an imbecile.

Today, machines seem to get better every day at digesting vast gulps of information—and they remain as emotionally inert as ever. But since the nineteen-nineties a small number of researchers have been working to give computers the capacity to read our feelings and react, in ways that have come to seem startlingly human. Experts on the voice have trained computers to identify deep patterns in vocal pitch, rhythm, and intensity; their software can scan a conversation between a woman and a child and determine if the woman is a mother, whether she is looking the child in the eye, whether she is angry or frustrated or joyful. Other machines can measure sentiment by assessing the arrangement of our words, or by reading our gestures. Still others can do so from facial expressions.

Our faces are organs of emotional communication; by some estimates, we transmit more data with our expressions than with what we say, and a few pioneers dedicated to decoding this information have made tremendous progress. Perhaps the most successful is an Egyptian scientist living near Boston, Rana el Kaliouby. Her company, Affectiva, formed in 2009, has been ranked by the business press as one of the country’s fastest-growing startups, and Kaliouby, thirty-six, has been called a “rock star.” There is good money in emotionally responsive machines, it turns out. For Kaliouby, this is no surprise: soon, she is certain, they will be ubiquitous.

This is a very compelling look at efforts that have in practice made computers more responsive to the emotions of users. With the goal of influencing users based upon the emotions that are detected.

Sound creepy already?

The article is fairly long but a great insight into progress already being made and that will be made in the not too distant future.

However, “emotionally responsive machines” remain the same imbeciles as they were in the story of H14. That is to say they can only “recognize” emotions much as they can “recognize” color. To be sure it “learns” but its reaction upon recognition remains a matter of programming and/or training.

The next wave of startups will create programmable emotional images of speakers, edging the arms race for privacy just another step down the road. If I were investing in startups, I would concentrate on those that defeat emotionally responsive computers.

If you don’t want to wait for a high tech way to defeat emotionally responsive computers, may I suggest a fairly low tech solution:

Wear a mask!

One of my favorites:

Egyptian_Guy_Fawkes_Mask

(From https://commons.wikimedia.org/wiki/Category:Masks_of_Guy_Fawkes. There are several unusual images there.)

Or choose any number of other masks at your nearest variety store.

A hard mask that conceals your eyes and movement of your face will defeat any “emotionally responsive computer.”

If you are concerned about your voice giving you away, search for “voice changer” for over 4 million “hits” on software to alter your vocal characteristics. Much of it for free.
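To see why naive voice changers are cheap and plentiful, here is a crude pitch shifter that simply resamples the audio. Real tools use phase vocoders so the clip's length doesn't change; everything here, names included, is illustrative:

```python
import math

def pitch_shift(samples, factor):
    """Crude 'voice changer': resample by `factor` (>1 raises pitch,
    and also shortens the clip, which proper tools avoid)."""
    out = []
    i = 0.0
    while i < len(samples) - 1:
        lo = int(i)
        frac = i - lo
        # Linear interpolation between neighboring samples.
        out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
        i += factor
    return out

def dominant_period(samples):
    """Estimate the waveform's period by counting upward zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return len(samples) / crossings if crossings else float("inf")
```

A factor of 2.0 roughly halves the period, i.e. shifts the voice up an octave.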

Defeating “emotionally responsive computers” remains like playing checkers against an imbecile. If you lose, it’s your own damned fault.

PS: If you have a Max Headroom type TV and don’t want to wear a mask all the time, consider this solution for its camera:

120px-Cutting_tool_2

Any startups yet based on defeating the Internet of Things (IoT)? Predicting 2016/17 will be the year for those to take off.

Paradise Lost (John MILTON, 1608 – 1674) Audio Version

Thursday, December 10th, 2015

Paradise Lost (John MILTON, 1608 – 1674) Audio Version.

As you know, John Milton was blind when he wrote Paradise Lost. His only “interface” for writing, editing and correcting was aural.

Shoppers and worshipers need to attend very closely to the rhetoric of the season. Listening to Paradise Lost, even as Milton did, may sharpen your ear for rhetorical devices and words that would otherwise pass unnoticed.

For example, what are the “good tidings” of Christmas hymns? Are they about the “…new born king…” or are they anticipating the sacrifice of that “…new born king…” instead of ourselves?

The first seems traditional and fairly benign, the second, seems more self-centered and selfish than the usual Christmas holiday theme.

If you think that is an aberrant view of the holiday, consider that in A Christmas Carol by Charles Dickens, Scrooge, spoiler alert, ends the tale by keeping Christmas in his heart all year round.

One of the morals being that we should treat others kindly and with consideration every day of the year. Not as some modern Christians do, half-listening at an hour long service once a week and spending the waking portion of the other 167 hours not being Christians.

Paradise Lost is a complex and nuanced text. Learning to spot its rhetorical moves and devices will make you a more discerning observer of modern discourse.

Enjoy!

The Utopian UI Architect [the power of representation]

Saturday, November 28th, 2015

The Utopian UI Architect by John Pavlus.

Following all the links and projects mentioned in this post will take some time but the concluding paragraph will provide enough incentive:


“The example I like to give is back in the days of Roman numerals, basic multiplication was considered this incredibly technical concept that only official mathematicians could handle,” he continues. “But then once Arabic numerals came around, you could actually do arithmetic on paper, and we found that 7-year-olds can understand multiplication. It’s not that multiplication itself was difficult. It was just that the representation of numbers — the interface — was wrong.”

Imagine that. A change in representation changed multiplication from a professional activity to one for 7-year-olds.

Now that is testimony to the power of representation.
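The point is easy to demonstrate in code: multiplying Roman numerals directly is painful, but trivial once you convert to a positional representation and back. A sketch:

```python
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
         (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
         (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    """Convert a positive integer to Roman numerals, greedily."""
    out = []
    for value, numeral in ROMAN:
        while n >= value:
            out.append(numeral)
            n -= value
    return "".join(out)

def from_roman(s):
    """Convert Roman numerals to an integer (subtractive pairs like IV)."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for a, b in zip(s, s[1:] + " "):
        v = values[a]
        total += -v if b != " " and values.get(b, 0) > v else v
    return total

def roman_multiply(a, b):
    """'XIV' times 'VII' is hard in-place; via the positional
    representation (the better interface) it is trivial."""
    return to_roman(from_roman(a) * from_roman(b))
```

All the "difficulty" lives in the conversion, which is exactly the interface change Victor is talking about.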

What other representations, common logic, RDF, category theory, compilers, etc., are making those activities more difficult than necessary?

There is no known or general answer to that question but Bret Victor’s work may spark clues from others.

A must read!

I first saw this in a tweet by Max Roser.

Can Good Web Design Create Joyful User Experiences? [Is Friction Good for the Soul?]

Tuesday, November 3rd, 2015

Can Good Web Design Create Joyful User Experiences? by Daniel O’Neil.

From the post:

The next revolution in web design is Joy.

Karen Holtzblatt, who is one of the creators of modern interaction design, argues that the discussion about interaction design needs to change to focus more on the idea of “Joy,”—for want of a better word—both in life and in use.

What does this look like for users of sites? Well, in short, the fundamental role of website and app designers is to help users avoid doing anything hard at all.

And yet we don’t always want things to be easy; in fact if everything is easy, the sense of accomplishment in life can be lost. Jesse Schell recently gave a talk called “Lessons in Game Design” that explores this idea. In Schell’s talk, he gives a lot of examples of people who seek out—in fact, expect—challenges in their gaming experience, even if they were not easy. Schell argues that many games cannot be good unless such challenges exist, largely because games need to appeal to the core facets of self-determination theory.

I am quite intrigued by the discussion of “friction:”


The first concept is friction. Any effort we take as human beings involves specific steps, be they throwing off the covers when we wake up to browsing a website. The feeling of fulfillment is in the stated goal or objective at that moment in time. When there is friction in the steps to achieve that goal, the effort to accomplish it increases it, but more importantly the steps are a distraction from the specific accomplishment. If, for example, I wanted to drive somewhere but I had to scrape ice off my windshield first, I would be experiencing friction. The step distracts from the objective.

Recalling Steve Newcomb’s metaphor of semantic friction between universes of discourse.

The post goes on to point out that some “friction” may not be viewed as an impediment: it can be one, but a particular user may not see it that way.

It makes me wonder whether information systems (think large search engines and their equally inept cousins, both electronic and paper) are inefficient and generate friction on purpose.

To give their users a sense of accomplishment by wrangling a sensible answer from a complex (to an outsider) data set.

I haven’t done any due diligence on that notion but it is something I will try to follow up on.

Perhaps topic maps need to reduce “semantic friction” gradually or only in some cases. Make sure that users still feel like they are accomplishing something.

Would enabling users to contribute to a mapping or “tweeting” results to co-workers generate a sense of accomplishment? Hard to say without testing.

Certainly broadens software design parameters beyond not failing and/or becoming a software pestilence like Adobe Flash.

Do one thing…

Monday, November 2nd, 2015

Do one thing… I don’t want barely distinguishable tools that are mediocre at everything; I want tools that do one thing and do it well. by Mike Loukides.

From the post:

I’ve been lamenting the demise of the Unix philosophy: tools should do one thing, and do it well. The ability to connect many small tools is better than having a single tool that does everything poorly.

That philosophy was great, but hasn’t survived into the Web age. Unfortunately, nothing better has come along to replace it. Instead, we have “convergence”: a lot of tools converging on doing all the same things poorly.

The poster child for this blight is Evernote. I started using Evernote because it did an excellent job of solving one problem. I’d take notes at a conference or a meeting, or add someone to my phone list, and have to distribute those files by hand from my laptop to my desktop, to my tablets, to my phone, and to any and all other machines that I might use.

Mike takes a stick to Evernote, Gmail, Google Maps, Skype, Twitter, Flickr, Dropbox (insert your list of non-single purpose tools here), etc.

Then he offers a critical insight about web applications:

…There’s no good way to connect one Web application to another. Therefore, everything tends to be monolithic; and in a world of monolithic apps, everyone wants to build their own garden, inevitably with all the features that are in all the other gardens.

Mike mentions IFTTT, which connects web services but wants something a bit more generic.

I think of IFTTT as walkways between a designated set of walled gardens. Useful for traveling between walled gardens but not anything else.

Mike concludes:

I don’t want anyone’s walled garden. I’ve seen what’s inside the walls, and it isn’t a palace; it’s a tenement. I don’t want barely distinguishable tools that are mediocre at everything. I want tools that do one thing, and do it well. And that can be connected to each other to build powerful tools.

What single purpose tool are you developing?

How will it interact with other single purpose tools?

Interactive visual machine learning in spreadsheets

Monday, November 2nd, 2015

Interactive visual machine learning in spreadsheets by Advait Sarkar, Mateja Jamnik, Alan F. Blackwell, Martin Spott.

Abstract:

BrainCel is an interactive visual system for performing general-purpose machine learning in spreadsheets, building on end-user programming and interactive machine learning. BrainCel features multiple coordinated views of the model being built, explaining its current confidence in predictions as well as its coverage of the input domain, thus helping the user to evolve the model and select training examples. Through a study investigating users’ learning barriers while building models using BrainCel, we found that our approach successfully complements the Teach and Try system [1] to facilitate more complex modelling activities.

To assist users in building machine learning models in spreadsheets:

The user should be able to critically evaluate the quality, capabilities, and outputs of the model. We present “BrainCel,” an interface designed to facilitate this. BrainCel enables the end-user to understand:

  1. How their actions modify the model, through visualisations of the model’s evolution.
  2. How to identify good training examples, through a colour-based interface which “nudges” the user to attend to data where the model has low confidence.
  3. Why and how the model makes certain predictions, through a network visualisation of the k-nearest neighbours algorithm; a simple, consistent way of displaying decisions in an arbitrarily high-dimensional space.
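A minimal sketch of the idea behind points 2 and 3 (mine, not BrainCel’s actual code): a k-nearest-neighbours classifier whose “confidence” is simply the share of neighbours that agree, which is exactly the signal a colour-based interface could use to nudge the user toward rows worth labelling.

```javascript
// Sketch: kNN prediction with confidence = fraction of agreeing neighbours.
// Low confidence marks the data a BrainCel-style UI would "nudge" the user toward.
function knnPredict(train, point, k = 3) {
  const dist = (a, b) =>
    Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
  const neighbours = train
    .map(r => ({ label: r.label, d: dist(r.x, point) }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k);
  const votes = {};
  neighbours.forEach(n => { votes[n.label] = (votes[n.label] || 0) + 1; });
  const [label, count] = Object.entries(votes)
    .sort((a, b) => b[1] - a[1])[0];
  return { label, confidence: count / k };
}

const train = [
  { x: [0, 0], label: 'A' }, { x: [0, 1], label: 'A' }, { x: [1, 0], label: 'A' },
  { x: [5, 5], label: 'B' }, { x: [5, 4], label: 'B' },
];
knnPredict(train, [0.5, 0.5]);  // inside the A cluster: full agreement
knnPredict(train, [2.5, 2.5]);  // between clusters: confidence drops
```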

A great example of going where users already spend their time (spreadsheets) rather than asking them to adopt entirely new approaches to data they already possess.

To get a deeper understanding of Sarkar’s approach to users via spreadsheets as an interface, see also:

Spreadsheet interfaces for usable machine learning by Advait Sarkar.

Abstract:

In the 21st century, it is common for people of many professions to have interesting datasets to which machine learning models may be usefully applied. However, they are often unable to do so due to the lack of usable tools for statistical non-experts. We present a line of research into using the spreadsheet — already familiar to end-users as a paradigm for data manipulation — as a usable interface which lowers the statistical and computing knowledge barriers to building and using these models.

Teach and Try: A simple interaction technique for exploratory data modelling by end users by Advait Sarkar, Alan F Blackwell, Mateja Jamnik, Martin Spott.

Abstract:

The modern economy increasingly relies on exploratory data analysis. Much of this is dependent on data scientists – expert statisticians who process data using statistical tools and programming languages. Our goal is to offer some of this analytical power to end-users who have no statistical training through simple interaction techniques and metaphors. We describe a spreadsheet-based interaction technique that can be used to build and apply sophisticated statistical models such as neural networks, decision trees, support vector machines and linear regression. We present the results of an experiment demonstrating that our prototype can be understood and successfully applied by users having no professional training in statistics or computing, and that the experience of interacting with the system leads them to acquire some understanding of the concepts underlying exploratory statistical modelling.

Sarkar doesn’t mention it, but while non-expert users lack skills with machine learning tools, they do have expertise with their own data and domain. That data/domain expertise is more difficult to communicate to a machine learning expert than machine learning techniques are to communicate to a non-expert.

Comparison of machine learning expert analysis versus domain data expert analysis lies in the not-too-distant, and interesting, future.

I first saw this in a tweet by Felienne Hermans.

Five Design Sheet [TM Interface Design]

Wednesday, October 28th, 2015

Five Design Sheet

Blog, resources and introductory materials for the Five Design Sheet (FdS) methodology.

FdS is described more formally in:

Sketching Designs Using the Five Design-Sheet Methodology by Jonathan C. Roberts, Chris James Headleand, Panagiotis D. Ritsos. (2015)

Abstract:

Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time, money and converge to better solutions more quickly. However, this design process is often viewed to be too informal. Consequently users do not know how to manage their thoughts and ideas (to first think divergently, to then finally converge on a suitable solution). We present the Five Design Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities, think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principle designs (sheets 2,3 and 4); before converging on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case-study of its use in industry and experience of its use in teaching.

The Five Design-Sheet (FdS) approach for Sketching Information Visualization Designs by Jonathan C. Roberts. (2011)

Abstract:

There are many challenges for a developer when creating an information visualization tool of some data for a client. In particular students, learners and in fact any designer trying to apply the skills of information visualization often find it difficult to understand what, how and when to do various aspects of the ideation. They need to interact with clients, understand their requirements, design some solutions, implement and evaluate them. Thus, they need a process to follow. Taking inspiration from product design, we present the Five design-Sheet approach. The FdS methodology provides a clear set of stages and a simple approach to ideate information visualization design solutions and critically analyze their worth in discussion with the client.

As written, FdS is entirely appropriate for a topic map interface, but how do you capture the subjects users do or want to talk about?

Suggestions?

Making Learning Easy by Design

Tuesday, October 20th, 2015

Making Learning Easy by Design – How Google’s Primer team approached UX by Sandra Nam.

From the post:

How can design make learning feel like less of a chore?

It’s not as easy as it sounds. Flat out, people usually won’t go out of their way to learn something new. Research shows that only 3% of adults in the U.S. spend time learning during their day.¹

Think about that for a second: Despite all the information available at our fingertips, and all the new technologies that emerge seemingly overnight, 97% of people won’t spend any time actively seeking out new knowledge for their own development.

That was the challenge at hand when our team at Google set out to create Primer, a new mobile app that helps people learn digital marketing concepts in 5 minutes or less.

UX was at the heart of this mission. Learning has several barriers to entry: you need to figure out what, where, how you want to learn, and then you need the time, money, and energy to follow through.

A short read that makes it clear that designing a learning experience is not easy or quick.

Take fair warning from:

only 3% of adults in the U.S. spend time learning during their day

when you plan on users “learning” a better way from your app or software.

Targeting 3% of a potential audience isn’t a sound marketing strategy.

Google is targeting the other 97%. Shouldn’t you too?

10,000 years of Cascadia earthquakes

Thursday, October 15th, 2015

10,000 years of Cascadia earthquakes

From the webpage:

The chart shows all 40 major earthquakes in the Cascadia Subduction Zone that geologists estimate have occurred since 9845 B.C. Scientists estimated the magnitude and timing of each quake by examining soil samples at more than 50 undersea sites between Washington, Oregon and California.

This chart is followed by:

Core sample sites 1999-2009

U.S. Geological Survey scientists studied undersea core samples of soil looking for turbidites — deposits of sediments that flow along the ocean floor during large earthquakes. The samples were gathered from more than 50 sites during cruises in 1999, 2002 and 2009.

Great maps but apparently one has nothing to do with the other.

If you mouse over the red dot closest to San Francisco, a pop-up says: “ID M9907-50BC Water Depth in Feet 10925.1972.” I suspect that may mean the water depth for the sample but without more, I can’t really say.

The fatal flaw of the presentation is that the data of the second map is disconnected from the first. There may be some relationship between the two, but it isn’t evident in the current presentation.

A good example of how not to display data sets on the same subject.

Text Making A Comeback As Interface?

Tuesday, September 22nd, 2015

Who Needs an Interface Anyway? Startups are piggybacking on text messaging to launch services. by Joshua Brustein.

From the post:

In his rush to get his latest startup off the ground, Ethan Bloch didn’t want to waste time designing a smartphone app. He thought people would appreciate the convenience of not having to download an app and then open it every time they wanted to use Digit, a tool that promotes savings. Introduced in February, it relies on text messaging to communicate with users. To sign up for the service, users go to Digit’s website and key in their cell number and checking account number. The software analyzes spending patterns and automatically sets money aside in a savings account. To see how much you’ve socked away, text “tell me my balance.” Key in “save more,” and Digit will do as you command. “A lot of the benefit of Digit takes place in the background. You don’t need to do anything,” says Bloch.

Conventional wisdom holds that intricately designed mobile apps are an essential part of most new consumer technology services. But there are signs people are getting apped out. While the amount of time U.S. smartphone users spend with apps continues to increase, the number of apps the average person uses has stayed pretty much flat for the last two years, according to a report Nielsen published in June. Some 200 apps account for more than 70 percent of total usage.

Golden Krishna, then a designer at Cooper, a San Francisco consulting firm that helps businesses create user experiences, anticipated the onset of app fatigue. In a 2012 blog post, “The best interface is no interface,” he argued that digital technology should strive to be invisible. It sparked a wide-ranging debate, and Krishna has spent the past several years making speeches, promoting a book with the same title as his essay, and doing consulting work for Silicon Valley companies.

Remembering the near ecstasy when visual interfaces replaced green screens, it goes against experience to credit text as the best interface.

However, you should start with Golden Krishna’s essay, “The best interface is no interface,” then move on to his keynote address: “The best interface is no interface” at SXSW 2013 and of course, his website, http://www.nointerface.com/book/, which has many additional resources, including his book by the same name.

It is way cool to turn a blog post into a cottage industry. Not just any blog post, but a very good blog post on a critical issue for every user facing software application.

To further inspire you to consider text as an interface, take special note of the line that reads:

“Some 200 apps account for more than 70 percent of total usage.”

In order to become a top app, you not only have to displace one of the top 200 apps, your app has to be chosen to replace it. That sounds like an uphill battle.

Not to say that making a text interface is going to be easy; it’s not. You will have to think about the interface more than you would when grabbing stock widgets to build a visual interface.
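To make the idea concrete, here is a hypothetical sketch of a Digit-style text interface: instead of widgets, the “UI” is a router that maps free-form messages to intents. The two command phrases come from the article; the function names, responses, and account shape are my own invention.

```javascript
// Hypothetical text-as-interface sketch (not Digit's actual code):
// route incoming messages to intents, no visual widgets required.
function handleMessage(text, account) {
  const msg = text.trim().toLowerCase();
  if (/balance/.test(msg)) {
    return `You have saved $${account.balance.toFixed(2)}.`;
  }
  if (/save more/.test(msg)) {
    account.aggressive = true; // save more aggressively from now on
    return 'Got it. I will set aside a little more each week.';
  }
  return "Sorry, I didn't catch that. Try 'tell me my balance'.";
}

const account = { balance: 125.5, aggressive: false };
handleMessage('Tell me my balance', account); // "You have saved $125.50."
handleMessage('save more', account);          // flips the savings behaviour
```

Even this toy version shows where the design effort goes: not into layout, but into deciding which phrasings count as which intent.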

On the upside, you may avoid the design clunkers that litter Krishna’s presentations and book.

An even better upside, you may avoid authoring one of the design clunkers that litter Krishna’s presentations.

I first saw this in a tweet by Bob DuCharme.

The Chaos Ladder

Tuesday, June 30th, 2015

The Chaos Ladder – A visualization of Game of Thrones character appearances by Patrick Gillespie

From the webpage:

What is this?

A visualization of character appearances on HBO’s Game of Thrones TV series.

  • Hover over a character to get more information.
  • Slide the timeline to see how things have changed over time. You can do this with your mouse or the arrow keys on your keyboard.

If you prefer something a bit more entertaining for the long holiday weekend, check out this visualization of characters from Game of Thrones on HBO. (Personally I prefer the book version.)

There are a number of modeling challenges in this tale. For example, how would you model the various relationships of Cersei Lannister and who knew about which relationships when?

Anyone modeling intelligence data should find that a warm up exercise. 😉

Enjoy!

Interactive Data Visualization…

Tuesday, June 30th, 2015

Interactive Data Visualization using D3.js, DC.js, Nodejs and MongoDB by Anmol Koul.

From the post:

The aim behind this blog post is to introduce open source business intelligence technologies and explore data using open source technologies like D3.js, DC.js, Nodejs and MongoDB.

Over the span of this post we will see the importance of the various components that we are using and we will do some code based customization as well.

The Need for Visualization:

Visualization is the so-called front-end of modern business intelligence systems. I have been around in quite a few big data architecture discussions and to my surprise I found that most of the discussions are focused on the backend components: the repository, the ingestion framework, the data mart, the ETL engine, the data pipelines and then some visualization.

I might be biased in favor of the visualization technologies as I have been working on them for a long time. Needless to say visualization is as important as any other component of a system. I hope most of you will agree with me on that. Visualization is instrumental in inferring the trends from the data, spotting outliers and making sense of the data-points.

What they say is right: a picture is indeed worth a thousand words.

The components of our analysis and their function:

D3.js: A javascript based visualization engine which will render interactive charts and graphs based on the data.

Dc.js: A javascript based wrapper library for D3.js which makes plotting the charts a lot easier.

Crossfilter.js: A javascript based data manipulation library. Works splendid with dc.js. Enables two way data binding.

Node JS: Our powerful server which serves data to the visualization engine and also hosts the webpages and javascript libraries.

Mongo DB: The resident No-SQL database which will serve as a fantastic data repository for our project.

[I added links to the components.]
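The component doing the least visible but most interesting work in that stack is Crossfilter. As a rough sketch of the idea (mine, in plain JavaScript, not Crossfilter’s actual API): each “dimension” carries its own filter, and every chart sees the records that pass all the *other* dimensions’ filters, which is what makes brushing one chart update all the rest.

```javascript
// Vanilla-JS sketch of the crossfilter idea: per-dimension filters,
// where each dimension's view excludes its own filter but applies
// everyone else's. Brushing one chart narrows all the others.
function makeCrossfilter(records) {
  const dims = [];
  return {
    dimension(accessor) {
      const dim = { accessor, filter: null };
      dims.push(dim);
      dim.filterRange = ([lo, hi]) => { dim.filter = v => v >= lo && v <= hi; };
      dim.clear = () => { dim.filter = null; };
      // Records passing every dimension's filter except this one's:
      dim.top = () => records.filter(r =>
        dims.every(d => d === dim || !d.filter || d.filter(d.accessor(r))));
      return dim;
    },
  };
}

const cf = makeCrossfilter([
  { year: 2013, sales: 10 }, { year: 2014, sales: 25 }, { year: 2015, sales: 5 },
]);
const byYear = cf.dimension(r => r.year);
const bySales = cf.dimension(r => r.sales);
bySales.filterRange([8, 30]);  // brushing the "sales" chart...
byYear.top();                  // ...narrows what the "year" chart shows
```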

A very useful walk through of interactive data visualization using open source tools.

It does require a time investment on your part but you will be richly rewarded with skills, ideas and new ways of thinking about visualizing your data.

Enjoy!

@alt_text_bot

Monday, April 20th, 2015

@alt_text_bot automatic text descriptions of images on Twitter by Cameron Cundiff

From the post:

Twitter is an important part of public discourse. As it becomes more and more image heavy, people who are blind are left out of the conversation. That’s where Alt-Bot comes in. Alt-Bot fills the gaps in image content using an image recognition API to add text descriptions.

The inspiration for the format of the message is a tweet by @stevefaulkner, in which he adds alt text to a retweet.

If accessibility isn’t high on your radar, imagine an adaptation of the same technique that recognizes sexual images and warns managers and diplomats of possible phishing scams.

Spread the word!

I first saw this in a tweet by Steve Faulkner.

UI Events (Formerly DOM Level 3 Events) Draft Published

Thursday, March 19th, 2015

UI Events (Formerly DOM Level 3 Events) Draft Published

From the post:

The Web Applications Working Group has published a Working Draft of UI Events (formerly DOM Level 3 Events). This specification defines UI Events which extend the DOM Event objects defined in DOM4. UI Events are those typically implemented by visual user agents for handling user interaction such as mouse and keyboard input. Learn more about the Rich Web Client Activity.

If you are planning on building rich web clients, now would be the time to start monitoring W3C drafts in this area to make sure your use cases are met.

People have different expectations with regard to features and standards quality. Make sure your expectations are heard.

Interactive Intent Modeling: Information Discovery Beyond Search

Wednesday, March 18th, 2015

Interactive Intent Modeling: Information Discovery Beyond Search by Tuukka Ruotsalo, Giulio Jacucci, Petri Myllymäki, Samuel Kaski.

From the post:

Combining intent modeling and visual user interfaces can help users discover novel information and dramatically improve their information-exploration performance.

Current-generation search engines serve billions of requests each day, returning responses to search queries in fractions of a second. They are great tools for checking facts and looking up information for which users can easily create queries (such as “Find the closest restaurants” or “Find reviews of a book”). What search engines are not good at is supporting complex information-exploration and discovery tasks that go beyond simple keyword queries. In information exploration and discovery, often called “exploratory search,” users may have difficulty expressing their information needs, and new search intents may emerge and be discovered only as they learn by reflecting on the acquired information. 8,9,18 This finding roots back to the “vocabulary mismatch problem” 13 that was identified in the 1980s but has remained difficult to tackle in operational information retrieval (IR) systems (see the sidebar “Background”). In essence, the problem refers to human communication behavior in which the humans writing the documents to be retrieved and the humans searching for them are likely to use very different vocabularies to encode and decode their intended meaning. 8,21

Assisting users in the search process is increasingly important, as everyday search behavior ranges from simple look-ups to a spectrum of search tasks 23 in which search behavior is more exploratory and information needs and search intents uncertain and evolving over time.

We introduce interactive intent modeling, an approach promoting resourceful interaction between humans and IR systems to enable information discovery that goes beyond search. It addresses the vocabulary mismatch problem by giving users potential intents to explore, visualizing them as directions in the information space around the user’s present position, and allowing interaction to improve estimates of the user’s search intents.

What!? All those years spent trying to beat users into learning complex search languages were in vain? Say it’s not so!

But, apparently it is so. All of the research on “vocabulary mismatch problem,” “different vocabularies to encode and decode their meaning,” has come back to bite information systems that offer static and author-driven vocabularies.

Users search best, no surprise, through vocabularies they recognize and understand.

I don’t know of any interactive topic maps in the sense used here but that doesn’t mean that someone isn’t working on one.

A shift in this direction could do wonders for the results of searches.
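The interaction loop the authors describe can be sketched in miniature (my reading, not their code): the system keeps a weight per candidate intent, shows the top-weighted ones as “directions” around the user’s current position, and nudges the weights toward whatever the user reinforces.

```javascript
// Toy sketch of interactive intent modeling: candidate intents carry
// weights; user interaction reinforces a direction and the displayed
// suggestions shift toward the user's (previously latent) intent.
function makeIntentModel(keywords) {
  const weights = Object.fromEntries(keywords.map(k => [k, 1]));
  return {
    reinforce(keyword, amount = 1) { weights[keyword] += amount; },
    suggestions(n = 3) {
      return Object.entries(weights)
        .sort((a, b) => b[1] - a[1])
        .slice(0, n)
        .map(([k]) => k);
    },
  };
}

// The classic ambiguous query: which "jaguar" did the user mean?
const model = makeIntentModel(['jaguar car', 'jaguar cat', 'jaguar os']);
model.reinforce('jaguar cat', 2);  // user pulls the "cat" direction closer
model.suggestions(2);              // the intent estimate shifts accordingly
```

The point of the sketch is the feedback loop, not the scoring: vocabulary mismatch is sidestepped because the user recognizes intents instead of having to formulate them.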

Futures of text

Wednesday, March 4th, 2015

Futures of text by Jonathan Libov.

From the post:

I believe comfort, not convenience, is the most important thing in software, and text is an incredibly comfortable medium. Text-based interaction is fast, fun, funny, flexible, intimate, descriptive and even consistent in ways that voice and user interface often are not. Always bet on text:

Text is the most socially useful communication technology. It works well in 1:1, 1:N, and M:N modes. It can be indexed and searched efficiently, even by hand. It can be translated. It can be produced and consumed at variable speeds. It is asynchronous. It can be compared, diffed, clustered, corrected, summarized and filtered algorithmically. It permits multiparty editing. It permits branching conversations, lurking, annotation, quoting, reviewing, summarizing, structured responses, exegesis, even fan fic. The breadth, scale and depth of ways people use text is unmatched by anything.

[Apologies, I lost some of Jonathan’s layout of the quote.]

Jonathan focuses on the use of text/messaging for interactions in a mobile environment, with many examples and suggestions for improvements along the way.

One observation that will have the fearful of an AI future (Elon Musk among others) running for the hills:

Messaging is the only interface in which the machine communicates with you much the same as the way you communicate with it. If some of the trends outlined in this post pervade, it would mark a qualitative shift in how we interact with computers. Whereas computer interaction to date has largely been about discrete, deliberate events — typing in the command line, clicking on files, clicking on hyperlinks, tapping on icons — a shift to messaging- or conversational-based UI’s and implicit hyperlinks would make computer interaction far more fluid and natural.

What’s more, messaging AI benefits from an obvious feedback loop: The more we interact with bots and messaging UI’s, the better it’ll get. That’s perhaps true for GUI as well, but to a far lesser degree. Messaging AI may get better at a rate we’ve never seen in the GUI world. Hold on tight.[Emphasis added.]

Think of it this way: a GUI locks you into the developer’s imagination. A text interface empowers the user and the AI’s imagination. I’m betting on the latter.

BTW, Jonathan ends with a great list of further reading on messaging and mobile applications.

Enjoy!

I first saw this in a tweet by Alyona Medelyan.

Typography Teardown of Advertising Age

Wednesday, February 25th, 2015

Typography Teardown of Advertising Age by Jeremiah Shoaf.

From the post:

I’m a huge fan of Samuel Hulick’s user onboarding teardowns so I thought it would be fun to try a new feature on Typewolf where I do a “typography teardown” of a popular website. I’ll review the design from a typographic perspective and discuss what makes the type work and what could potentially have been done better.

In this first edition I’m going to take a deep dive into the type behind the Advertising Age website. But first, a disclaimer.

Disclaimer: The following site was created by designers way more talented than myself. This is simply my opinion on the typography and how, at times, I may have approached things differently. Rules in typography are meant to be broken.

As you already know, I’m at least graphically challenged if not worse. 😉

Still, it doesn’t prevent me from enjoying graphics and layouts, I just have a hard time originating them. And I keep trying by reading resources such as this one.

While Jeremiah reviews a website, the same principles apply to an application interface.

Enjoy!

In Defense of the Good Old-Fashioned Map

Saturday, February 14th, 2015

In Defense of the Good Old-Fashioned Map – Sometimes, a piece of folded paper takes you to places the GPS can’t by Jason H. Harper.

A great testimonial to hard copy maps in addition to being a great read!

From the post:


But just like reading an actual, bound book or magazine versus an iPad or Kindle, you consume a real map differently. It’s easier to orient yourself on a big spread of paper, and your eye is drawn to roads and routes and green spaces you’d never notice on a small screen. A map invites time and care and observance of the details. It encourages the kind of exploration that happens in real life, when you’re out on the road, instead of the turn-by-turn rigidity of a digital device.

You can scroll or zoom with a digital map or digital representation of a topic map, but that isn’t quite the same as using a large, hard copy representation. Digital scrolling and zooming is like exploring a large-scale world map through a toilet paper tube. It’s doable, but I would argue it is a very different experience from a physical large-scale world map.

Unless you are at a high-end visualization center, or until we have walls as high-resolution displays, you may want to think about producing topic maps as hard copy maps for some applications. While having maps printed isn’t cheap, it pales next to the intellectual effort that goes into constructing a useful topic map.

A physical representation of a topic map would have all the other advantages of a hard copy map. It would survive and be accessible without electrical power, it could be manually annotated, it could shared with others in the absence of computers, it could be compared to observations and/or resources, in fact it could be rather handy.

I don’t have a specific instance in mind but raise the point to keep in mind the range of topic map deliverables.

Cool Interactive experiments of 2014

Wednesday, January 14th, 2015

Cool Interactive experiments of 2014

From the post:

As we continue to look back at 2014, in search of the most interesting, coolest and useful pieces of content that came to our attention throughout the year, it’s only natural that we find projects that, despite being much less known and spoken of by the data visualization community than the ones of “The New York Times” or “The Washington Post”, have a certain “je ne sais quoi” to it, either it’s the project’s overall aesthetics, or the type of the data visualized.

Most of all, these projects show how wide the range of what visualization can be used for, outside the pressure of a client, a deadline or a predetermined tool to use. Self-promoting pieces, despite the low general value they might have, still play a determinant role in helping information designers test and expand their skills. Experimentation is at the core of this exciting time we are living in, so we gathered a couple of dozens of visual experiments that we had the opportunity to feature in our weekly “Interactive Inspiration” round ups, published every Friday.

Very impressive! I will just list the titles for you here:

  • The Hobbit | Natalia Bilenko, Asako Miyakawa
  • Periodic Table of Storytelling | James Harris
  • Graph TV | Kevin Wu
  • Beer Viz | Divya Anand, Sonali Sharma, Evie Phan, Shreyas
  • One Human Heartbeat | Jen Lowe
  • We can do better | Ri Liu
  • F1 Scope | Michal Switala
  • F1 Timeline | Peter Cook
  • The Largest Vocabulary in Hip hop | Matt Daniels
  • History of Rock in 100 Songs | Silicon Valley Data Science
  • When sparks fly | Lam Thuy Vo
  • The Colors of Motion | Charlie Clark
  • World Food Clock | Luke Twyman
  • Score to Chart | Davide Oliveri
  • Culturegraphy | Kim Albrecht
  • The Big Picture | Giulio Fagiolini
  • Commonwealth War Dead: First World War Visualised | James Offer
  • The Pianogram | Joey Cloud
  • Faces per second in episodes of House of Cards TV Series | Virostatiq
  • History Words Flow | Santiago Ortiz
  • Global Weirding | Cicero Bengler

If they have this sort of eye candy every Friday, mark me down as a regular visitor to VisualLoop.

BTW, I could have used XSLT to scrape the titles from the HTML but since there weren’t any odd line breaks, a regex in Emacs did the same thing with far fewer keystrokes.
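For the curious, the approach is easy to reproduce outside Emacs as well. The snippet below is a hypothetical sketch (the markup and titles are stand-ins, not the actual VisualLoop HTML): a single regex with a capture group pulls the link text out of list items, which is all the "scraping" this kind of page needs.

```javascript
// Hypothetical fragment of a round-up page: each entry is a list item
// whose link text holds "Title | Author".
const html = `
<li><a href="#">Graph TV | Kevin Wu</a></li>
<li><a href="#">One Human Heartbeat | Jen Lowe</a></li>
`;

// One regex with a capture group extracts the link text, much as a
// regex search in Emacs would, and with no XSLT pipeline required.
const titles = [...html.matchAll(/<a [^>]*>([^<]+)<\/a>/g)].map(m => m[1]);

console.log(titles);
```

If the markup had odd line breaks inside the anchors, XSLT (or any real HTML parser) would be the safer choice; the regex shortcut only works because the source is regular.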

I sometimes wonder if “interactive visualization” focuses too much on the visualization reacting to our input. After all, we already interact with visual stimuli in ways I haven’t seen duplicated on the image side. In that sense, reading a book is an interactive experience, just on the user’s side.

PhantomFlow

Saturday, January 3rd, 2015

PhantomFlow

From the webpage:

PhantomFlow

UI testing with decision trees. An experimental approach to UI testing, based on Decision Trees. A NodeJS wrapper for PhantomJS, CasperJS and PhantomCSS, PhantomFlow enables a fluent way of describing user flows in code whilst generating structured tree data for visualisation.

PhantomFlow Report: Test suite overview with radial Dendrogram and pie visualisation

The above visualisation is a real-world example, showing the complexity of visual testing at Huddle.

Aims

  • Enable a more expressive way of describing user interaction paths within tests
  • Fluently communicate UI complexity to stakeholders and team members through generated visualisations
  • Support TDD and BDD for web applications and responsive web sites
  • Provide a fast feedback loop for UI testing
  • Raise profile of visual regression testing
  • Support visual regression workflows, quick inspection & rebasing via the UI
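To make the “user flows as decision trees” idea concrete, here is a minimal, self-contained sketch of the model. This is not PhantomFlow’s actual API; the `flow`, `decision`, and `paths` names are illustrative assumptions. The point is that once flows are expressed as a tree, each leaf-to-root path is one test case, and the same tree is the structured data a dendrogram visualisation would consume.

```javascript
// A flow is a named sequence of steps; a step is either a plain UI
// action (string) or a decision point with named branches.
function flow(name, steps) {
  return { name, steps };
}

function decision(branches) {
  return { decision: branches };
}

// Walk the tree and enumerate every distinct user path. In this
// sketch a decision ends the current step list: each branch carries
// its own continuation flow.
function paths(node, prefix = []) {
  const out = [];
  for (const step of node.steps) {
    if (typeof step === "string") {
      prefix = [...prefix, step];
    } else {
      for (const [label, sub] of Object.entries(step.decision)) {
        out.push(...paths(sub, [...prefix, label]));
      }
      return out;
    }
  }
  out.push(prefix);
  return out;
}

// Hypothetical checkout flow with one decision and two branches.
const checkout = flow("Buy a book", [
  "Open product page",
  "Add to basket",
  decision({
    "Pay by card": flow("card", ["Enter card details", "Confirm order"]),
    "Pay by PayPal": flow("paypal", ["Redirect to PayPal", "Confirm order"]),
  }),
]);

console.log(paths(checkout));
```

Each enumerated path is what a tool like PhantomFlow would drive through a headless browser, capturing screenshots along the way for visual regression comparison.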

If you are planning on being more user focused (translation: successful in gaining users) this year, PhantomFlow may be the tool for you!

It strikes me as a tool that can present a workflow differently from the way you are accustomed to seeing it. I find that helpful because I tend to overlook potential difficulties when I already know how a function is supposed to work.

To a user, a red button labeled STOP! may mean the application stops. It may not occur to them that it destroys the decryption key on the hard drive, so the data cannot be decrypted even if the key is given up under torture. If that happens on their hard drive, they may be rather miffed.