Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

May 3, 2014

Guerilla Usability Test: Yelp

Filed under: Interface Research/Design,Usability — Patrick Durusau @ 3:07 pm

Guerilla Usability Test: Yelp by Gary Yu.

Proof that usability testing doesn’t have to be overly complex or long.

Useful insights from five (5) users and a low-tech approach to analysis.

Like you, I have known organizations to spend more than a year on web design/re-design issues and still demur at doing even this degree of user testing.

Lack of time, resources, and expertise (they got that last one right, though not on the user testing issue) were the most common excuses.

Skipping testing greatly reduced the chance that executives would hear any disagreement with their design choices, but I don’t consider that to be a goal of interface design.

Human Sense Making

Filed under: Bioinformatics,Interface Research/Design,Sense,Sensemaking,Workflow — Patrick Durusau @ 12:38 pm

Scientists’ sense making when hypothesizing about disease mechanisms from expression data and their needs for visualization support by Barbara Mirel and Carsten Görg.

Abstract:

A common class of biomedical analysis is to explore expression data from high throughput experiments for the purpose of uncovering functional relationships that can lead to a hypothesis about mechanisms of a disease. We call this analysis expression driven, -omics hypothesizing. In it, scientists use interactive data visualizations and read deeply in the research literature. Little is known, however, about the actual flow of reasoning and behaviors (sense making) that scientists enact in this analysis, end-to-end. Understanding this flow is important because if bioinformatics tools are to be truly useful they must support it. Sense making models of visual analytics in other domains have been developed and used to inform the design of useful and usable tools. We believe they would be helpful in bioinformatics. To characterize the sense making involved in expression-driven, -omics hypothesizing, we conducted an in-depth observational study of one scientist as she engaged in this analysis over six months. From findings, we abstracted a preliminary sense making model. Here we describe its stages and suggest guidelines for developing visualization tools that we derived from this case. A single case cannot be generalized. But we offer our findings, sense making model and case-based tool guidelines as a first step toward increasing interest and further research in the bioinformatics field on scientists’ analytical workflows and their implications for tool design.

From the introduction:

In other domains, improvements in data visualization designs have relied on models of analysts’ actual sense making for a complex analysis [2]. A sense making model captures analysts’ cumulative, looped (not linear) “process [es] of searching for a representation and encoding data in that representation to answer task-specific questions” relevant to an open-ended problem [3]: 269. As an end-to-end flow of application-level tasks, a sense making model may portray and categorize analytical intentions, associated tasks, corresponding moves and strategies, informational inputs and outputs, and progression and iteration over time. The importance of sense making models is twofold: (1) If an analytical problem is poorly understood developers are likely to design for the wrong questions, and tool utility suffers; and (2) if developers do not have a holistic understanding of the entire analytical process, developed tools may be useful for one specific part of the process but will not integrate effectively in the overall workflow [4,5].

As the authors admit, one case isn’t enough to generalize from, but their methodology, with its focus on the actual workflow of a scientist, is a refreshing break from imagined and/or “ideal” workflows for scientists.

Until now, semantic software has followed someone’s projection of an “ideal” workflow.

The next generation of semantic software should follow the actual workflows of people working with their data.

I first saw this in a tweet by Neil Saunders.

April 18, 2014

A/B Tests and Facebook

Filed under: A/B Tests,Interface Research/Design,Usability,User Targeting — Patrick Durusau @ 2:32 pm

The reality is most A/B tests fail, and Facebook is here to help by Kaiser Fung.

From the post:

Two years ago, Wired breathlessly extolled the virtues of A/B testing (link). A lot of Web companies are in the forefront of running hundreds or thousands of tests daily. The reality is that most A/B tests fail.

A/B tests fail for many reasons. Typically, business leaders consider a test to have failed when the analysis fails to support their hypothesis. “We ran all these tests varying the color of the buttons, and nothing significant ever surfaced, and it was all a waste of time!” For smaller websites, it may take weeks or even months to collect enough samples to read a test, and so business managers are understandably upset when no action can be taken at its conclusion. It feels like waiting for the train which is running behind schedule.

Bad outcome isn’t the primary reason for A/B test failure. The main ways in which A/B tests fail are:

  1. Bad design (or no design);
  2. Bad execution;
  3. Bad measurement.

These issues are often ignored or dismissed. They may not even be noticed if the engineers running the tests have not taken a proper design of experiments class. However, even though I earned an A at school, it wasn’t until I started running real-world experiments that I really learned the subject. This is an area in which theory and practice are both necessary.

The Facebook Data Science team just launched an open platform for running online experiments, called PlanOut. This looks like a helpful tool to avoid design and execution problems. I highly recommend looking into how to integrate it with your website. An overview is here, and a more technical paper (PDF) is also available. There is a github page.

The rest of this post gets into some technical, sausage-factory stuff, so be warned.

For all of your software tests, do you run any A/B tests on your interface?

Or is your response to UI criticism, “…well, but all of us like it.” That’s a great test for a UI.

If you don’t read any other blog post this weekend, read Kaiser’s take on A/B testing.
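To make the “bad execution” point concrete: a common execution bug is inconsistent assignment, where the same visitor lands in different arms on different visits and quietly corrupts the measurement. A minimal sketch of deterministic, hash-based bucketing in Python (purely illustrative, not PlanOut’s API):

    import hashlib

    def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
        """Deterministically assign a user to a variant.

        The same (user_id, experiment) pair always hashes to the same bucket,
        so a visitor never flips between arms across sessions.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # A stable 50/50 split for a hypothetical "button-color" experiment.
    print(assign_variant("user-42", "button-color"))  # same answer on every call

Salting the hash with the experiment name keeps assignments independent across experiments, which is exactly the kind of detail a framework like PlanOut is meant to handle for you.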

April 17, 2014

Clojure Procedural Dungeons

Filed under: Clojure,Games,Interface Research/Design,Programming — Patrick Durusau @ 6:13 pm

Clojure Procedural Dungeons

From the webpage:

When making games, there are two ways to make a dungeon. The common method is to design one in the CAD tool of our choice (or to draw one in case of 2D games).

The alternative is to automatically generate random Dungeons by using a few very powerful algorithms. We could automatically generate a whole game world if we wanted to, but let’s take one step after another.

In this Tutorial we will implement procedural Dungeons in Clojure, while keeping everything as simple as possible so everyone can understand it.
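The tutorial itself is in Clojure; to give a flavor of how little code a basic generator needs, here is a toy “drunkard’s walk” sketch in Python (my own illustration, not the tutorial’s algorithm):

    import random

    def carve_dungeon(width=40, height=20, floor_target=250, seed=None):
        """Carve a dungeon by wandering randomly from the center, turning walls into floor."""
        rng = random.Random(seed)
        grid = [["#"] * width for _ in range(height)]
        x, y = width // 2, height // 2
        carved = 0
        while carved < floor_target:
            if grid[y][x] == "#":
                grid[y][x] = "."
                carved += 1
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 1), width - 2)  # stay inside the outer wall
            y = min(max(y + dy, 1), height - 2)
        return "\n".join("".join(row) for row in grid)

    print(carve_dungeon(seed=1))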

Just in case you are interested in a gaming approach for a topic maps interface.

Not as crazy as that may sound. One of the brightest CS types I ever knew spent a year playing a version of Myst from start to finish.

Think about app sales if you can make your interface addictive.

Suggestion: Populate your topic map authoring interface with trolls (accounting), smiths (manufacturing), cavalry (shipping), royalty (management), wizards (IT), etc., and turn the collection of information about their information into tokens, spells, etc. Sprinkle in user preference activities and companions.

That would be a lot of work but I suspect you would get volunteers to create new levels as your information resources evolve.

April 11, 2014

Placement of Citations [Discontinuity and Users]

Filed under: Interface Research/Design,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 12:53 pm

If the Judge Will Be Reading My Brief on a Screen, Where Should I Place My Citations? by Peter W. Martin.

From the post:

In a prior post I explored how the transformation of case law to linked electronic data undercut Brian Garner’s longstanding argument that judges should place their citations in footnotes. As that post promised, I’ll now turn to Garner’s position as it applies to writing that lawyers prepare for judicial readers.

[Image: brief page]

Implicitly, Garner’s position assumes a printed page, with footnote calls embedded in the text and the related notes placed at the bottom. In print that entirety is visible at once. The eyes must move, but both call and footnote remain within a single field of vision. Secondly, when the citation sits inert on a printed page and the cited source is online, the decision to inspect that source and when to do so is inevitably influenced by the significant discontinuity that transaction will entail. In print, citation placement contributes little to that discontinuity. The situation is altered – significantly, it seems to me – when a brief or memorandum is submitted electronically and will most likely be read from a screen. In 2014 that is the case with a great deal of litigation.

This is NOT a discussion of interest only to lawyers and judges.

While Peter has framed the issue in terms of contrasting citation styles, as he also points out there is a question of “discontinuity” in these styles, and, I would argue, of comprehension for the reader.

At first blush, being a regular hypertext maven, you may think that inline citations are “the way to go” on this citation issue.

To some degree I would agree with you, but leaving the current display to consult a citation or other material that could appear in a footnote introduces another form of discontinuity.

You are no longer reading a brief prepared by someone familiar with the law and facts at hand but someone who is relying on different facts and perhaps even a different legal context for their statements.

If you are a regular reader of hypertexts, try writing down the opinion of one author on a note card, follow a hyperlink in that post to another resource, record the second author’s opinion on the same subject on a second note card and then follow a link from the second resource to a third and repeat the note card opinion recording. Set all three cards aside, with no marks to associate them with a particular author.

After two (2) days return to the cards and see if you can distinguish the card you made for the first author from the next two.

Yes, after a very short while you are unable to identify the exact source of information that you were trying to remember. Now imagine that in a legal context where facts and/or law are in dispute. Exactly how much “other” content do you want to display with your inline reference?

The same issue comes up for topic map interfaces. Do you really want to display all the information on a subject or do you want to present the user with a quick overview and enable them to choose greater depth?

Personally I would use citations with pop-ups that contain a summary of the cited authority, with a link to the fuller resource. So a judge could quickly confirm their understanding of a case without waiting for resources to load, etc.

But in any event, how much visual or cognitive discontinuity your interface is inflicting on users is an important issue.

April 4, 2014

Making Data Classification Work

Filed under: Authoring Topic Maps,Classification,Interface Research/Design — Patrick Durusau @ 7:06 pm

Making Data Classification Work by James H. Sawyer.

From the post:

The topic of data classification is one that can quickly polarize a crowd. The one side believes there is absolutely no way to make the classification of data and the requisite protection work — probably the same group that doesn’t believe in security awareness and training for employees. The other side believes in data classification as they are making it work within their environments, primarily because their businesses require it. The difficulty in choosing a side lies in the fact that both are correct.

Apologies, my quoting of James is misleading.

James is addressing the issue of “classification” of data in the sense of keeping information secret.

What is amazing is that the solution James proposes for “classification” in the sense of what is kept secret has a lot of resonance for “classification” in the sense of getting users to manage categories of data or documents.

One hint:

Remember how poorly even librarians use the Library of Congress subject headings? Contrast that with nearly everyone using the aisle categories at the local grocery store.

You can design a topic map that even experts use poorly, or one that nearly everyone is able to use.

Your call.

March 24, 2014

Google Search Appliance and Libraries

Using Google Search Appliance (GSA) to Search Digital Library Collections: A Case Study of the INIS Collection Search by Dobrica Savic.

From the post:

In February 2014, I gave a presentation at the conference on Faster, Smarter and Richer: Reshaping the library catalogue (FSR 2014), which was organized by the Associazione Italiana Biblioteche (AIB) and Biblioteca Apostolica Vaticana in Rome, Italy. My presentation focused on the experience of the International Nuclear Information System (INIS) in using Google Search Appliance (GSA) to search digital library collections at the International Atomic Energy Agency (IAEA). 

Libraries are facing many challenges today. In addition to diminished funding and increased user expectations, the use of classic library catalogues is becoming an additional challenge. Library users require fast and easy access to information resources, regardless of whether the format is paper or electronic. Google Search, with its speed and simplicity, has established a new standard for information retrieval which did not exist with previous generations of library search facilities. Put in a position of David versus Goliath, many small, and even larger libraries, are losing the battle to Google, letting many of its users utilize it rather than library catalogues.

The International Nuclear Information System (INIS)

The International Nuclear Information System (INIS) hosts one of the world's largest collections of published information on the peaceful uses of nuclear science and technology. It offers on-line access to a unique collection of 3.6 million bibliographic records and 483,000 full texts of non-conventional (grey) literature. This large digital library collection suffered from most of the well-known shortcomings of the classic library catalogue. Searching was complex and complicated, it required training in Boolean logic, full-text searching was not an option, and response time was slow. An opportune moment to improve the system came with the retirement of the previous catalogue software and the adoption of Google Search Appliance (GSA) as an organization-wide search engine standard.
….

To be completely honest, my first reaction wasn’t a favorable one.

But even the complete blog post does not do justice to the project in question.

Take a look at the slides, which include screenshots of the new interface, before reaching an opinion.

Take this as a lesson on what your search interface should be offering by default.

There are always other screens you can fill with advanced features.

March 23, 2014

How to Quickly Add Nodes and Edges…

Filed under: Authoring Topic Maps,Graphs,Interface Research/Design,Neo4j — Patrick Durusau @ 7:39 pm

How to Quickly Add Nodes and Edges to Graphs

From the webpage:

The existing interfaces for graph manipulation all suffer from the same problem: it’s very difficult to quickly enter the nodes and edges. One has to create a node, then another node, then make an edge between them. This takes a long time and is cumbersome. Besides, such approach is not really as fast as our thinking is.

We, at Nodus Labs, decided to tackle this problem using what we already do well: #hashtagging the @mentions. The basic idea is that you create the nodes and edges in something that we call a “statement”. Within this #statement you can mark the #concepts with #hashtags, which will become nodes and then mark the @contexts or @lists where you want them to appear with @mentions. This way you can create huge graphs in a matter of seconds and if you do not believe us, watch this screencast of our application below.

You can also try it online on www.infranodus.com or even install it on your local machine using our free open-source repository on http://github.com/noduslabs/infranodus.

+1! for using “…what we already do well….” for an authoring interface.

Getting any ideas for a topic map authoring interface?
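The parsing half of the idea is almost trivial, which is part of its appeal. A rough sketch (mine, not InfraNodus code) of turning a #hashtag/@mention statement into nodes and edges:

    import re
    from itertools import combinations

    def parse_statement(statement: str):
        """Extract nodes (#hashtags), contexts (@mentions) and co-occurrence edges."""
        nodes = re.findall(r"#(\w+)", statement)
        contexts = re.findall(r"@(\w+)", statement)
        edges = list(combinations(nodes, 2))  # every pair of concepts in one statement
        return nodes, contexts, edges

    nodes, contexts, edges = parse_statement(
        "mark the #concepts with #hashtags in a #statement and file it under @graphdemo"
    )
    print(nodes)     # ['concepts', 'hashtags', 'statement']
    print(contexts)  # ['graphdemo']
    print(edges)     # [('concepts', 'hashtags'), ('concepts', 'statement'), ...]

Swap “node” for “topic” and “edge” for “association” and you are most of the way to a statement-based shorthand for topic map authoring.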

March 17, 2014

Office Lens Is a Snap (Point and Map?)

Office Lens Is a Snap

From the post:

The moment mobile-phone manufacturers added cameras to their devices, they stopped being just mobile phones. Not only have lightweight phone cameras made casual photography easy and spontaneous, they also have changed the way we record our lives. Now, with help from Microsoft Research, the Office team is out to change how we document our lives in another way—with the Office Lens app for Windows Phone 8.

Office Lens, now available in the Windows Phone Store, is one of the first apps to use the new OneNote Service API. The app is simple to use: Snap a photo of a document or a whiteboard, and upload it to OneNote, which stores the image in the cloud. If there is text in the uploaded image, OneNote’s cloud-based optical character-recognition (OCR) software turns it into editable, searchable text. Office Lens is like having a scanner in your back pocket. You can take photos of recipes, business cards, or even a whiteboard, and Office Lens will enhance the image and put it into your OneNote Quick Notes for reference or collaboration. OneNote can be downloaded for free.

Less than five (5) years ago, every automated process in Office Lens would have been a configurable setting.

Today, it’s just point and shoot.

There is an interface lesson for topic maps in the Office Lens interface.

Some people will need the Office Lens API. But, the rest of us, just want to take a picture of the whiteboard (or some other display). Automatic storage and OCR are welcome added benefits.

What about a topic map authoring interface that looks a lot like MS Word™ or OpenOffice? A topic map is loaded much like a spelling dictionary. When the user selects “map-it,” links are inserted that point into the topic map.

Hover over such a link and data from the topic map is displayed. It can be printed, annotated, etc.

One possible feature would be a “subject check” that displays the subjects “recognized” in the document, so the author can correct any recognition errors.
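A minimal sketch of that “map-it”/“subject check” pass, assuming the topic map boils down to a dictionary of subject names keyed to topic identifiers (the names and identifiers below are hypothetical):

    import re

    # Hypothetical topic map "dictionary": subject name -> topic identifier.
    TOPIC_DICTIONARY = {
        "Office Lens": "topic:office-lens",
        "OneNote": "topic:onenote",
        "OCR": "topic:ocr",
    }

    def map_it(text: str):
        """Return (subject, topic id, offset) for each recognized subject in the text."""
        hits = []
        for name, topic_id in TOPIC_DICTIONARY.items():
            for match in re.finditer(re.escape(name), text):
                hits.append((name, topic_id, match.start()))
        return sorted(hits, key=lambda hit: hit[2])

    sample = "Snap a photo with Office Lens and OneNote's OCR makes it searchable."
    for subject, topic_id, offset in map_it(sample):
        print(f"{offset:3d}  {subject} -> {topic_id}")

The sorted hit list is the “subject check” view; turning each hit into an inserted link is the word-processor half of the job.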

In case you are interested, I can point you to some open source projects that have general authoring interfaces. 😉

PS: If you have a Windows phone, can you check out Office Lens for me? I am still sans a cellphone of any type. Since I don’t get out of the yard a cellphone doesn’t make much sense. But I do miss out on the latest cellphone technology. Thanks!

March 16, 2014

Hadoop Alternative Hydra Re-Spawns as Open Source

Filed under: Hadoop,Hydra,Interface Research/Design,Stream Analytics — Patrick Durusau @ 7:31 pm

Hadoop Alternative Hydra Re-Spawns as Open Source by Alex Woodie.

From the post:

It may not have the name recognition or momentum of Hadoop. But Hydra, the distributed task processing system first developed six years ago by the social bookmarking service maker AddThis, is now available under an open source Apache license, just like Hadoop. And according to Hydra’s creator, the multi-headed platform is very good at some big data tasks that the yellow pachyderm struggles with–namely real-time processing of very big data sets.

Hydra is a big data storage and processing platform developed by Matt Abrams and his colleagues at AddThis (formerly Clearspring), the company that develops the Web server widgets that allow visitors to easily share something via their Twitter, Facebook, Pinterest, Google+, or Instagram accounts.

When AddThis started scaling up its business in the mid-2000s, it got flooded with data about what users were sharing. The company needed a scalable, distributed system that could deliver real-time analysis of that data to its customers. Hadoop wasn’t a feasible option at that time. So it built Hydra instead.

So, what is Hydra? In short, it’s a distributed task processing system that supports streaming and batch operations. It utilizes a tree-based data structure to store and process data across clusters with thousands of individual nodes. It features a Linux-based file system, which makes it compatible with ext3, ext4, or even ZFS. It also features a job/cluster management component that automatically allocates new jobs to the cluster and rebalances existing jobs. The system automatically replicates data and handles node failures automatically.

The tree-based structure allows it to handle streaming and batch jobs at the same time. In his January 23 blog post announcing that Hydra is now open source, Chris Burroughs, a member of AddThis’ engineering department, provided this useful description of Hydra: “It ingests streams of data (think log files) and builds trees that are aggregates, summaries, or transformations of the data. These trees can be used by humans to explore (tiny queries), as part of a machine learning pipeline (big queries), or to support live consoles on websites (lots of queries).”

To learn a lot more about Hydra, see its GitHub page.
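Burroughs’ description, “it ingests streams of data … and builds trees that are aggregates,” is easy to picture with a toy version. A sketch (my own miniature, not Hydra’s internals) that folds a stream of log events into a nested count tree:

    from collections import defaultdict

    def tree():
        """A nested dict that creates child nodes on demand."""
        return defaultdict(tree)

    def ingest(root, event, path=("date", "service", "status")):
        """Walk the tree along the event's field values, bumping a counter at the leaf."""
        node = root
        for field in path:
            node = node[event[field]]
        node["count"] = node.get("count", 0) + 1

    root = tree()
    stream = [
        {"date": "2014-03-16", "service": "share", "status": "ok"},
        {"date": "2014-03-16", "service": "share", "status": "ok"},
        {"date": "2014-03-16", "service": "share", "status": "error"},
    ]
    for event in stream:
        ingest(root, event)

    print(root["2014-03-16"]["share"]["ok"]["count"])  # 2

The real system does this across thousands of nodes with replication and job scheduling, but the tree-of-aggregates shape is the core idea.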

Another candidate for “real-time processing of very big data sets.”

The reflex is to applaud improvements in processing speed. But what sort of problems require that kind of speed? I know the usual suspects: modeling the weather, nuclear explosions, chemical reactions. But at some point the processing ends and a human reader has to comprehend the results.

Better to get information to the human reader sooner rather than later, but there is a limit to the speed at which a user can understand the results of a computational process.

From a UI perspective, what research is there on how fast/slow information should be pushed at a user?

Could make the difference between an app that is just annoying and one that is truly useful.

I first saw this in a tweet by Joe Crobak.

March 5, 2014

Dumb, Dumber, Dumbest

Filed under: Humor,Interface Research/Design — Patrick Durusau @ 2:03 pm

There are times when the lack of quality in government and other organizations seems explainable: People work there!

From recent news stories:

Dumb:

18% of people fall for phishing emails. (Hacking Critical Infrastructure Companies – A Pen Tester View)

Dumber:

11% of Americans think HTML is a sexually transmitted disease. (1 in 10 Americans think HTML is an STD, study finds)

Dumbest:

An elementary school principal:

responded to a Craigslist advertisement over the weekend and talked with an undercover officer who posed as a child’s mother looking to arrange for a man to meet her teenage daughter. (Bond set for Douglas County principal arrested in sex sting)

It’s been a while since I was a teenager but I don’t remember any mothers taking out ads in the newspaper for their daughters. Do you?

Take this as a reminder to do realistic user testing of interfaces.

  1. Pick people at random and put them in front of your interface.
  2. Take video of their efforts to use the interface for the intended task(s).
  3. Ask the users what about your interface confused them.
  4. Fix the interface (do not attempt to fix the users; there are plenty more where they came from).
  5. Return to step 1.

February 24, 2014

Findability and Exploration:…

Findability and Exploration: the future of search by Stijn Debrouwere.

From the introduction:

The majority of people visiting a news website don’t care about the front page. They might have reached your site from Google while searching for a very specific topic. They might just be wandering around. Or they’re visiting your site because they’re interested in one specific event that you cover. This is big. It changes the way we should think about news websites.

We need ambient findability. We need smart ways of guiding people towards the content they’d like to see — with categorization and search playing complementary goals. And we need smart ways to keep readers on our site, especially if they’re just following a link from Google or Facebook, by prickling their sense of exploration.

Pete Bell recently opined that search is the enemy of information architecture. That’s too bad, because we’re really going to need great search if we’re to beat Wikipedia at its own game: providing readers with timely information about topics they care about.

First, we need to understand a bit more about search. What is search?

A classic (2010) statement of the requirements for a “killer” app. I didn’t say “search” app because search might not be a major aspect of its success. At least if you measure success in terms of user satisfaction after using an app.

A satisfaction that comes from obtaining the content they want to see. How they got there isn’t important to them.

GenomeBrowse

Filed under: Bioinformatics,Genomics,Interface Research/Design,Visualization — Patrick Durusau @ 4:36 pm

GenomeBrowse

From the webpage:

Golden Helix GenomeBrowse® visualization tool is an evolutionary leap in genome browser technology that combines an attractive and informative visual experience with a robust, performance-driven backend. The marriage of these two equally important components results in a product that makes other browsers look like 1980s DOS programs.

Visualization Experience Like Never Before

GenomeBrowse makes the process of exploring DNA-seq and RNA-seq pile-up and coverage data intuitive and powerful. Whether viewing one file or many, an integrated approach is taken to exploring your data in the context of rich annotation tracks.

This experience features:

  • Zooming and navigation controls that are natural as they mimic panning and scrolling actions you are familiar with.
  • Coverage and pile-up views with different modes to highlight mismatches and look for strand bias.
  • Deep, stable stacking algorithms to look at all reads in a pile-up zoom, not just the first 10 or 20.
  • Context-sensitive information by clicking on any feature. See allele frequencies in control databases, functional predictions of non-synonymous variants, exon positions of genes, or even details of a single sequenced read.
  • A dynamic labeling system which gives optimal detail on annotation features without cluttering the view.
  • The ability to automatically index and compute coverage data on BAM or VCF files in the background.

I’m very interested in seeing how the interface fares in the bioinformatics domain. Every domain is different but there may be some cross-over in terms of popular UI features.

I first saw this in a tweet by Neil Saunders.

February 17, 2014

Linux Kernel Map

Filed under: Interface Research/Design,Linux OS,Maps — Patrick Durusau @ 3:41 pm

Linux Kernel Map by Antony Peel.

A very good map of the Linux Kernel.

I haven’t tried to reproduce it here because the size reduction would make it useless.

In sufficient resolution, this would make a nice interface to Usenet Linux postings.

I may have to find a print shop that can convert this into a folding map version.

Enjoy!

February 12, 2014

iPhone interface design

Filed under: Interface Research/Design — Patrick Durusau @ 11:51 am

iPhone interface design by Edward Tufte.

From the post:

The iPhone platform elegantly solves the design problem of small screens by greatly intensifying the information resolution of each displayed page. Small screens, as on traditional cell phones, show very little information per screen, which in turn leads to deep hierarchies of stacked-up thin information–too often leaving users with “Where am I?” puzzles. Better to have users looking over material adjacent in space rather than stacked in time.

To do so requires increasing the information resolution of the screen by the hardware (higher resolution screens) and by screen design (eliminating screen-hogging computer administrative debris, and distributing information adjacent in space).

Tufte’s take on iPhone interface design with reader comments.

The success of the iPhone interface is undeniable. The spread of its lessons, at least to “big” screens, less so.

There are interfaces I use where a careless click of the mouse surfaces a second or even a third way to perform a task, or at the very least more menus.

If you are looking for an industry with nearly unlimited potential for growth, think user interface/user experience.

I first saw this in a tweet by Gregory Piatetsky.

February 8, 2014

News Genius

Filed under: Annotation,Interface Research/Design,News,Social Networks — Patrick Durusau @ 3:52 pm

News Genius (about page)

From the webpage:

What is News Genius?

http://news.rapgenius.com/General-dwight-d-eisenhower-d-day-message-sent-just-prior-to-invasion-annotated

News Genius helps you make sense of the news by putting stories in context, breaking down subtext and bias, and crowdsourcing knowledge from around the world!

You can find speeches, interviews, articles, recipes, and even sports news, from yesterday and today, all annotated by the community and verified experts. With everything from Eisenhower speeches to reports on marijuana arrest horrors, you can learn about politics, current events, the world stage, and even meatballs!

Who writes the annotations?

Anyone can! Just create an account and start annotating. You can highlight any line to annotate it yourself, suggest changes to existing annotations, and even put up your favorite texts. Getting started is very easy. If you make good contributions, you’ll earn News IQ™, and if you share true knowledge, eventually you’ll be able to edit and annotate anything on the site.

How do I make verified annotations on my own work?

Verified users are experts in the news community. This includes journalists, like Spencer Ackerman, groups like the ACLU and Smart Chicago Collaborative, and even U.S. Geological Survey. Interested in getting you or your group verified? Sign up and request your verified account!

Sam Hunting forwarded this to my attention.

Interesting interface.

Assuming that you created associations between the text and annotator without bothering the author, this would work well for some aspects of a topic map interface.

I did run into the problem that who gets to write the “annotation” depends on who gets there first. If you pick text that has already been annotated, at most you can post a suggestion or vote the existing annotation up or down.

BTW, this started as a music site so when you search for topics, there are a lot of rap, rock and poetry hits. Not so many news “hits.”

You can imagine my experience when I searched for “markup” and “semantics.”

I probably need to use more common words. 😉

I don’t know the history of the site, but aside from the one-annotation-per-passage rule, you can certainly get started quickly creating and annotating content.

That is a real plus over many of the interfaces I have seen.

Comments?

PS: The one-annotation-per-passage rule is all the more annoying when you find that very few Jimi Hendrix songs have any parts that are not already annotated. 🙁

February 7, 2014

Self-promotion for game developers

Filed under: Interface Research/Design,Marketing — Patrick Durusau @ 3:38 pm

Self-promotion for game developers by Raph Koster.

From the post:

I’m writing this for Mattie Brice, who was just listed as one of Polygon’s 50 game newsmakers of the year.

We had a brief Twitter exchange after I offered congratulations, in which she mentioned that she didn’t know she could put this on a CV, and that she “know[s] nothing of self-promotion.” I have certainly never been accused of that, so this is a rehash of stuff I have written elsewhere and elsewhen.

To be clear, this post is not about marketing your games. It is about marketing yourself, and not even that, but about finding your professional place within the industry.

Raph’s advice is applicable to any field. Read it, but more than that, take it to heart.

While you are there, take a look at: Theory of Fun for Game Design by Raph Koster.

There are no rules that say topic map applications have to be drudgery.

I first saw this in a tweet by Julia Evans.

February 3, 2014

UX Crash Course: 31 Fundamentals

Filed under: Interface Research/Design,Library,Library software,Usability,UX — Patrick Durusau @ 10:10 am

UX Crash Course: 31 Fundamentals by Joel Marsh.

From the post:

Basic UX Principles: How to get started

The following list isn’t everything you can learn in UX. It’s a quick overview, so you can go from zero-to-hero as quickly as possible. You will get a practical taste of all the big parts of UX, and a sense of where you need to learn more. The order of the lessons follows a real-life UX process (more or less) so you can apply these ideas as-you-go. Each lesson also stands alone, so feel free to bookmark them as a reference!

Main topics:

  • Introduction & Key Ideas
  • How to Understand Users
  • Information Architecture
  • Visual Design Principles
  • Functional Layout Design
  • User Psychology
  • Designing with Data

Users who interact with designers (librarians and library students come to mind) would do well to review these posts. If nothing else, it will give them better questions to ask vendors about their web interface design process.

December 28, 2013

Intel® XDK

Filed under: Interface Research/Design,Javascript — Patrick Durusau @ 12:00 pm

Intel® XDK

From the webpage:

Intel® XDK is a no-cost, integrated, front-to-back HTML5 app development environment for true cross-platform apps for multiple app stores and form-factor devices. Features in the first release included:

  • Editor, Device Emulator and Debugger
  • App for On-device Testing
  • Javascript UI library optimized for mobile
  • APIs for game developers with accelerated canvas
  • Prototype GUI quick-start wizard
  • Installs on Microsoft Windows*, Apple OS X*, runs in Google Chrome*
  • Intel cloud-based build system for most app stores
  • No need to download Native Platform SDKs
  • Tool to convert iOS* apps to HTML5

Numerous other resources, including forums, are available from this page.

If you want to deliver topic map based content to mobile devices, this is a must stop.

I first saw this in Nat Torkington’s Four short links: 27 December 2013.

December 24, 2013

Design, Math, and Data

Filed under: Dashboard,Data,Design,Interface Research/Design — Patrick Durusau @ 2:58 pm

Design, Math, and Data: Lessons from the design community for developing data-driven applications by Dean Malmgren.

From the post:

When you hear someone say, “that is a nice infographic” or “check out this sweet dashboard,” many people infer that they are “well-designed.” Creating accessible (or for the cynical, “pretty”) content is only part of what makes good design powerful. The design process is geared toward solving specific problems. This process has been formalized in many ways (e.g., IDEO’s Human Centered Design, Marc Hassenzahl’s User Experience Design, or Braden Kowitz’s Story-Centered Design), but the basic idea is that you have to explore the breadth of the possible before you can isolate truly innovative ideas. We, at Datascope Analytics, argue that the same is true of designing effective data science tools, dashboards, engines, etc — in order to design effective dashboards, you must know what is possible.

As founders of Datascope Analytics, we have taken inspiration from Julio Ottino’s Whole Brain Thinking, learned from Stanford’s d.school, and even participated in an externship swap with IDEO to learn how the design process can be adapted to the particular challenges of data science (see interspersed images throughout).

If you fear “some assembly required,” imagine how users feel with new interfaces.

Good advice on how to explore potential interface options.

Do you think HTML5 will lead to faster mock-ups?

See for example:

21 Fresh Examples of Websites Using HTML5 (2013)

40+ Useful HTML5 Examples and Tutorials (2012)

HTML5 Website Showcase: 48 Potential Flash-Killing Demos (2009, est.)

December 10, 2013

Paginated Collections with Ember.js + Solr + Rails

Filed under: Interface Research/Design,Solr — Patrick Durusau @ 5:11 pm

Paginated Collections with Ember.js + Solr + Rails by Eduardo Figarola.

From the post:

This time, I would like to show you how to add a simple pagination helper to your Ember.js application.

For this example, I will be using Rails + Solr for the backend and Ember.js as my frontend framework.

I am doing this with Rails and Solr, but you can do it using other backend frameworks, as long as the JSON’s response resembles what we have here:
….

I mention this just on the off-chance that you will encounter users requesting pagination.

I’m not sure anything beyond page 1 and page 2 is needed in most pagination scenarios.

I remember reading, in a study of query behavior on PubMed, that you had better have a disease that appears in the first two pages of results.

Anywhere beyond the first two pages, well, your family’s best hope is that you have life insurance.

If a client asks for more than two pages of results, I would suggest monitoring search query behavior for, say, six months.

Just to give them an idea of what beyond page two is really accomplishing.
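For reference, the backend half of Solr pagination is just two request parameters, start and rows. A sketch using Python’s requests library (the URL and collection name are placeholders):

    import requests

    SOLR_URL = "http://localhost:8983/solr/collection1/select"  # placeholder

    def fetch_page(query: str, page: int, per_page: int = 10):
        """Fetch one page of Solr results; pages are numbered from 1."""
        params = {
            "q": query,
            "wt": "json",
            "start": (page - 1) * per_page,  # offset of the first result
            "rows": per_page,                # page size
        }
        response = requests.get(SOLR_URL, params=params, timeout=10)
        response.raise_for_status()
        body = response.json()["response"]
        return body["numFound"], body["docs"]

    total, docs = fetch_page("restaurants", page=2)
    print(total, len(docs))

If six months of query logs show almost no requests with a start value past 10 or 20, that is your answer about deeper pagination.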

December 9, 2013

Domino

Filed under: Cloud Computing,Interface Research/Design,R — Patrick Durusau @ 2:11 pm

San Francisco startup takes on collaborative Data Science from The R Backpages 2 by Joseph Rickert.

From the post:

Domino, a San Francisco based startup, is inviting users to sign up to beta test its vision of online, Data Science collaboration. The site is really pretty slick, and the vision of cloud computing infrastructure integrated with an easy to use collaboration interface and automatic code revisioning is compelling. Moreover, it is delightfully easy to get started with Domino. After filling out the new account form, a well thought out series of screens walks the new user through downloading the client software, running a script (R, MatLab or Python) and viewing the results online. The domino software creates a quick-start directory on your PC where it looks for scripts to run. After the installation is complete it is just a matter of firing up a command window to run scripts in the cloud with:
….

Great review by Joseph on Domino and its use on a PC.

Persuaded me to do an install on my local box:

Installation on Ubuntu 12.04

  • Get a Domino Account
  • Download/Save the domino-install-unix.sh file to a convenient directory. (Just shy of 20MB.)
  • chmod 744 domino-install-unix.sh
  • ./domino-install-unix.sh
  • If you aren’t root, just ignore the symlink question. It’s a bug, but the install continues happily. Tech support promptly reported that it will be fixed.
  • BTW, if you install from a shell window, you need to open a new shell window to pick up the modified path that includes the domino executable.
  • Follow the QuickStart, Steps 3, 4, and 5.
  • Step six of the QuickStart seems to be unnecessary. As the owner of the job, I was set to get email notification anyway.
  • Steps seven and eight of the QuickStart require no elaboration.

BTW, tech support was quick and on point in response to my questions about the installation script.

I have only run the demo scripts at this point, but Domino looks like an excellent resource for R users and a great model for bringing the cloud to your desktop.

Leveraging software a user already knows to seamlessly deliver greater capabilities, has to be a winning combination.

December 6, 2013

DARPA’s online games crowdsource software security

Filed under: Authoring Topic Maps,Crowd Sourcing,Games,Interface Research/Design — Patrick Durusau @ 8:22 pm

DARPA’s online games crowdsource software security by Kevin McCaney.

From the post:

Flaws in commercial software can cause serious problems if cyberattackers take advantage of them with their increasingly sophisticated bag of tricks. The Defense Advanced Research Projects Agency wants to see if it can speed up discovery of those flaws by making a game of it. Several games, in fact.

DARPA’s Crowd Sourced Formal Verification (CSFV) program has just launched its Verigames portal, which hosts five free online games designed to mimic the formal software verification process traditionally used to look for software bugs.

Verification, both dynamic and static, has proved to be the best way to determine if software is free of flaws, but it requires software engineers to perform “mathematical theorem-proving techniques” that can be time-consuming, costly and unable to scale to the size of some of today’s commercial software, according to DARPA. With Verigames, the agency is testing whether untrained (and unpaid) users can verify the integrity of software more quickly and less expensively.

“We’re seeing if we can take really hard math problems and map them onto interesting, attractive puzzle games that online players will solve for fun,” Drew Dean, DARPA program manager, said in announcing the portal launch. “By leveraging players’ intelligence and ingenuity on a broad scale, we hope to reduce security analysts’ workloads and fundamentally improve the availability of formal verification.”

If program verification is possible with online games, I don’t know of any principled reason why topic map authoring should not be possible.

Maybe fill-in-the-blank topic map authoring is just a poor authoring technique for topic maps.

Imagine gamifying data streams to be like Missile Command. 😉

Can you even count the number of hours that you played Missile Command?

Now consider the impact of a topic map authoring interface that addictive.

Particularly if the user didn’t know they were doing useful work.

November 25, 2013

How-to: Index and Search Data with Hue’s Search App

Filed under: Hue,Indexing,Interface Research/Design,Solr — Patrick Durusau @ 4:32 pm

How-to: Index and Search Data with Hue’s Search App

From the post:

You can use Hue and Cloudera Search to build your own integrated Big Data search app.

In a previous post, you learned how to analyze data using Apache Hive via Hue’s Beeswax and Catalog apps. This time, you’ll see how to make Yelp Dataset Challenge data searchable by indexing it and building a customizable UI with the Hue Search app.

Don’t be discouraged by the speed of the presenter in the video.

I suspect he is more than “familiar” with the Hue, Solr and the Yelp dataset. 😉

Like all great “how-to” guides you get a very positive outcome.

A positive outcome with minimal effort may be essential reinforcement for new technologies.

November 23, 2013

Frameless

Filed under: Interface Research/Design,Javascript,SVG,XPath,XSLT — Patrick Durusau @ 2:39 pm

Frameless

From the webpage:

Frameless is an XSLT 2 processor running in the browser, directly written in JavaScript. It includes an XPath 2 query engine for simple, powerful querying. It works cross-browser, we have even reached compatibility with IE6 and Firefox 1.

With Frameless you’ll be able to do things the browsers won’t let you, such as using $variables and adding custom functions to XPath. What’s more, XPath 2 introduces if/else and for-loops. We’ll even let you use some XPath 3 functionality! Combine data into a string using the brand new string concatenation operator.

Use way overdue math functions such as sin() and cos(), essential when generating data-powered SVG graphics. And use Frameless.select() to overcome the boundaries between XSLT and JavaScript.

When to use Frameless?

Frameless is created to simplify application development and is, due to its API, great for writing readable code.

It will make application development a lot easier and it’s a good fit for all CRUD applications and applications with tricky DOM manipulation.

Who will benefit by using it?

  • Designers and managers will be able to read the code and even fix some bugs.
  • Junior developers will get up to speed in no time and write code with a high level of abstraction, and they will be able to create prototypes that’ll be shippable.
  • Senior developers will be able to create complicated web applications for all browsers and write them declaratively.

What it’s not

Frameless doesn’t intend to fully replace functional DOM manipulation libraries like jQuery. If you like you can use such libraries and Frameless at the same time.

Frameless doesn’t provide a solution for cross-browser differences in external CSS stylesheets. We add prefixes to some inline style attributes, but you should not write your styles inline only for this purpose. We do not intend to replace any CSS extension language, such as for example Sass.

Frameless is very sparse on documentation but clearly the potential for browser-based applications is growing.

I first saw this in a tweet by Michael Kay.

November 20, 2013

Middle Earth and Hobbits, A Winning Combination!

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 8:16 pm

Google turns to Middle Earth and Hobbits to show off Chrome’s magic by Kevin C. Tofel.

From the post:

Google has a new Chrome Experiment out in the wild — or the wilds, if you prefer. The latest is a showcase for the newest web technologies packed into Chrome for mobile devices, although it works on traditional computers as well. And what better or richer world to explore on your mobile device is there than J.R.R. Tolkien’s Middle Earth?

Point your Chrome mobile browser to middle-earth.thehobbit.com to explore the Trollshaw Forest, Rivendell and Dol Guldur with additional locations currently locked. Here’s a glimpse of what to expect:

“It may not feel like it, but this cinematic part of the experience was built with just HTML, CSS, and JavaScript. North Kingdom used the Touch Events API to support multi-touch pinch-to-zoom and the Full Screen API to allow users to hide the URL address bar. It looks natural on any screen size thanks to media queries and feels low-latency because of hardware-accelerated CSS Transitions.”

(Note, I repaired the link to http://middle-earth.thehobbit.com in the post which as posted, simply returned you to the post.)

This project and others like it should have UI coders taking a hard look at browsers.

What are your requirements that can’t be satisfied by a browser interface? (Be sure you understand the notion of sunk costs before answering that question.)

November 5, 2013

The Shelley-Godwin Archive

The Shelley-Godwin Archive

From the homepage:

The Shelley-Godwin Archive will provide the digitized manuscripts of Percy Bysshe Shelley, Mary Wollstonecraft Shelley, William Godwin, and Mary Wollstonecraft, bringing together online for the first time ever the widely dispersed handwritten legacy of this uniquely gifted family of writers. The result of a partnership between the New York Public Library and the Maryland Institute for Technology in the Humanities, in cooperation with Oxford’s Bodleian Library, the S-GA also includes key contributions from the Huntington Library, the British Library, and the Houghton Library. In total, these partner libraries contain over 90% of all known relevant manuscripts.

In case you don’t recognize the names: Mary Shelley wrote Frankenstein; or, The Modern Prometheus; William Godwin, philosopher, early modern (unfortunately theoretical) anarchist; Percy Bysshe Shelley, English Romantic poet; Mary Wollstonecraft, writer, feminist. Quite a group for the time or even now.

From the About page on Technological Infrastructure:

The technical infrastructure of the Shelley-Godwin Archive builds on linked data principles and emerging standards such as the Shared Canvas data model and the Text Encoding Initiative’s Genetic Editions vocabulary. It is designed to support a participatory platform where scholars, students, and the general public will be able to engage in the curation and annotation of the Archive’s contents.

The Archive’s transcriptions and software applications and libraries are currently published on GitHub, a popular commercial host for projects that use the Git version control system.

  • TEI transcriptions and other data
  • Shared Canvas viewer and search service
  • Shared Canvas manifest generation

All content and code in these repositories is available under open licenses (the Apache License, Version 2.0 and the Creative Commons Attribution license). Please see the licensing information in each individual repository for additional details.

Shared Canvas and Linked Open Data

Shared Canvas is a new data model designed to facilitate the description and presentation of physical artifacts—usually textual—in the emerging linked open data ecosystem. The model is based on the concept of annotation, which it uses both to associate media files with an abstract canvas representing an artifact, and to enable anyone on the web to describe, discuss, and reuse suitably licensed archival materials and digital facsimile editions. By allowing visitors to create connections to secondary scholarship, social media, or even scenes in movies, projects built on Shared Canvas attempt to break down the walls that have traditionally enclosed digital archives and editions.

Linked open data or content is published and licensed so that “anyone is free to use, reuse, and redistribute it—subject only, at most, to the requirement to attribute and/or share-alike,” (from http://opendefinition.org/) with the additional requirement that when an entity such as a person, a place, or thing that has a recognizable identity is referenced in the data, the reference is made using a well-known identifier—called a universal resource identifier, or “URI”—that can be shared between projects. Together, the linking and openness allow conformant sets of data to be combined into new data sets that work together, allowing anyone to publish their own data as an augmentation of an existing published data set without requiring extensive reformulation of the information before it can be used by anyone else.

The Shared Canvas data model was developed within the context of the study of medieval manuscripts to provide a way for all of the representations of a manuscript to co-exist in an openly addressable and shareable form. A relatively well-known example of this is the tenth-century Archimedes Palimpsest. Each of the pages in the palimpsest was imaged using a number of different wavelengths of light to bring out different characteristics of the parchment and ink. For example, some inks are visible under one set of wavelengths while other inks are visible under a different set. Because the original writing and the newer writing in the palimpsest used different inks, the images made using different wavelengths allow the scholar to see each ink without having to consciously ignore the other ink. In some cases, the ink has faded so much that it is no longer visible to the naked eye. The Shared Canvas data model brings together all of these different images of a single page by considering each image to be an annotation about the page instead of a surrogate for the page. The Shared Canvas website has a viewer that demonstrates how the imaging wavelengths can be selected for a page.

One important bit, at least for topic maps, is the view of the Shared Canvas data model that:

each image [is considered] to be an annotation about the page instead of a surrogate for the page.

If I tried to say that or even re-say it, it would be much more obscure. 😉

Whether “annotation about” versus “surrogate for” will catch on beyond manuscript studies is hard to say.

Not the way it is usually said in topic maps but if other terminology is better understood, why not?

October 15, 2013

a Google example: preattentive attributes

Filed under: Interface Research/Design,Perception,Visualization — Patrick Durusau @ 4:30 pm

a Google example: preattentive attributes

From the post:

The topic of my short preso at the visual.ly meet up last week in Mountain View was preattentive attributes. I started by discussing exactly what preattentive attributes are (those aspects of a visual that our iconic memory picks up, like color, size, orientation, and placement on page) and how they can be used strategically in data visualization (for more on this, check out my last blog post). Next, I talked through a Google before-and-after example applying the lesson, which I’ll now share with you here.

Preattentive attributes.

Now there is a concept to work into interface/presentation design!

Would its opposite be:

Counter-intuitive attributes?

Are you using “preattentive attributes” in interfaces/presentations or do you rely on what you find intuitive/transparent?

I first saw this cited at Chart Porn.

October 13, 2013

Eurostat regional yearbook 2013 [PDF as Topic Map Interface?]

Filed under: EU,Government,Interface Research/Design,Statistics — Patrick Durusau @ 9:02 pm

Eurostat regional yearbook 2013

From the webpage:

Statistical information is an important tool for understanding and quantifying the impact of political decisions in a specific territory or region. The Eurostat regional yearbook 2013 gives a detailed picture relating to a broad range of statistical topics across the regions of the Member States of the European Union (EU), as well as the regions of EFTA and candidate countries. Each chapter presents statistical information in maps, figures and tables, accompanied by a description of the main findings, data sources and policy context. These regional indicators are presented for the following 11 subjects: economy, population, health, education, the labour market, structural business statistics, tourism, the information society, agriculture, transport, and science, technology and innovation. In addition, four special focus chapters are included in this edition: these look at European cities, the definition of city and metro regions, income and living conditions according to the degree of urbanisation, and rural development.

The Statistical Atlas is an interactive map viewer, which contains statistical maps from the Eurostat regional yearbook and provides the possibility to download these maps as high-resolution PDFs.

PDF version of the Eurostat regional yearbook 2013

But this isn’t a dead PDF file:

Under each table, figure or map in all Eurostat publications you will find hyperlinks with Eurostat online data codes, allowing easy access to the most recent data in Eurobase, Eurostat’s online database. A data code leads to either a two- or three-dimensional table in the TGM (table, graph, map) interface or to an open dataset which generally contains more dimensions and longer time series using the Data Explorer interface (3). In the Eurostat regional yearbook, these online data codes are given as part of the source below each table, figure and map.

In the PDF version of this publication, the reader is led directly to the freshest data when clicking on the hyperlinks for Eurostat online data codes. Readers of the printed version can access the freshest data by typing a standardised hyperlink into a web browser, for example:

http://ec.europa.eu/eurostat/product?code=<data code>&mode=view, where <data code> is to be replaced by the online data code in question.

A great data collection for anyone interested in the EU.

Take particular note of how delivery in PDF format does not preclude accessing additional information.

I assume that would extend to topic map-based content as well.

Where there is a tradition of delivery of information in a particular form, why would you want to change it?

Or to put it differently, what evidence is there of a pay-off from another form of delivery?

Noting that I don’t consider hyperlinks to be substantively different from other formal references.

Formal references are a staple of useful writing, albeit hyperlinks (can) take less effort to follow.

October 11, 2013

Quick Etymology

Filed under: Interface Research/Design,Language — Patrick Durusau @ 10:34 am

A tweet by Norm Walsh observes:

“Etymology of the word ___” in Google gives a railroad diagram answer on the search results page. Nice.

That, along with “define ____”, is suggestive of short-cuts for a topic map interface.

Yes?

Thinking of: “Relationships with _____”

Of course, Tiger Woods would be a supernode (“…a vertex with a disproportionately high number of incident edges”). 😉
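A sketch of how such shortcuts might be routed in a topic map interface (the patterns and view names here are hypothetical):

    import re

    SHORTCUTS = [
        (re.compile(r"^define (.+)$", re.I), "definition"),
        (re.compile(r"^etymology of the word (.+)$", re.I), "etymology"),
        (re.compile(r"^relationships with (.+)$", re.I), "associations"),
    ]

    def route_query(query: str):
        """Map a shortcut phrase to a (view, subject) pair; fall back to plain search."""
        for pattern, view in SHORTCUTS:
            match = pattern.match(query.strip())
            if match:
                return view, match.group(1)
        return "search", query

    print(route_query("Relationships with Tiger Woods"))  # ('associations', 'Tiger Woods')
    print(route_query("define supernode"))                # ('definition', 'supernode')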

