Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

April 18, 2014

A/B Tests and Facebook

Filed under: A/B Tests,Interface Research/Design,Usability,User Targeting — Patrick Durusau @ 2:32 pm

The reality is most A/B tests fail, and Facebook is here to help by Kaiser Fung.

From the post:

Two years ago, Wired breathlessly extolled the virtues of A/B testing (link). A lot of Web companies are in the forefront of running hundreds or thousands of tests daily. The reality is that most A/B tests fail.

A/B tests fail for many reasons. Typically, business leaders consider a test to have failed when the analysis fails to support their hypothesis. “We ran all these tests varying the color of the buttons, and nothing significant ever surfaced, and it was all a waste of time!” For smaller websites, it may take weeks or even months to collect enough samples to read a test, and so business managers are understandably upset when no action can be taken at its conclusion. It feels like waiting for a train that is running behind schedule.

A bad outcome isn’t the primary reason for A/B test failure. The main ways in which A/B tests fail are:

  1. Bad design (or no design);
  2. Bad execution;
  3. Bad measurement.

These issues are often ignored or dismissed. They may not even be noticed if the engineers running the tests have not taken a proper design of experiments class. However, even though I earned an A at school, it wasn’t until I started running real-world experiments that I really learned the subject. This is an area in which theory and practice are both necessary.

The Facebook Data Science team just launched an open platform for running online experiments, called PlanOut. This looks like a helpful tool to avoid design and execution problems. I highly recommend looking into how to integrate it with your website. An overview is here, and a more technical paper (PDF) is also available. There is a GitHub page.

The rest of this post gets into some technical, sausage-factory stuff, so be warned.
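
To give a sense of what using PlanOut involves, here is a minimal sketch modeled on the examples in the project’s documentation. The experiment class, parameter name, and color values are my own illustration, not from any production system:

    from planout.experiment import SimpleExperiment
    from planout.ops.random import UniformChoice

    # A minimal PlanOut experiment; class and parameter names are
    # illustrative only.
    class ButtonColorExperiment(SimpleExperiment):
        def assign(self, params, userid):
            # A deterministic hash of userid picks the variant, so the
            # same user always sees the same button color.
            params.button_color = UniformChoice(
                choices=['#d9534f', '#5cb85c'], unit=userid)

    exp = ButtonColorExperiment(userid=42)
    print(exp.get('button_color'))  # returns the assignment and logs an exposure

Because the assignment is a deterministic function of the unit (here, the user id), there is no per-user state to store, which is part of what makes the design and execution less error-prone.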

With all the testing you do on your software, do you run any A/B tests on your interface?

Or is your response to UI criticism, “…well, but all of us like it”? That’s a great test for a UI.

If you don’t read any other blog post this weekend, read Kaiser’s take on A/B testing.
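
As a footnote to Kaiser’s point about smaller websites waiting weeks or months to read a test, a back-of-the-envelope power calculation shows why. A sketch in Python; the conversion rates are invented for illustration:

    from math import ceil, sqrt
    from statistics import NormalDist

    # Approximate sample size per arm for comparing two conversion
    # rates (two-sided alpha = 0.05, power = 0.80).
    def samples_per_arm(p1, p2, alpha=0.05, power=0.80):
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                     + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(numerator / (p1 - p2) ** 2)

    # Detecting a lift from a 2.0% to a 2.4% conversion rate needs
    # roughly 21,000 users per arm; weeks of traffic for a small site.
    print(samples_per_arm(0.020, 0.024))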

May 3, 2013

…The More Things Stay the Same (TECO Line Editor)

Filed under: Interface Research/Design,Usability,User Targeting,Users — Patrick Durusau @ 8:56 am

I just started reading Programming As If People Mattered by Nathaniel Borenstein.

To start chapter 5, Nathaniel relates this story about TECO, an “infamously powerful but hard-to-use line editor…”:

As you probably know, TECO is a line editor in which all of the commands are control characters. To enter some text you would type control-a, followed by the text, and a control-d to end the text. When I was first learning TECO I decided to type in a ten-page paper. I typed control-a, followed by all ten pages of text, followed by the control-d. Unfortunately, as I was typing in the paper I must have hit another control character. So when I typed the final control-d I received the message: ‘Unknown control character–input ignored.’ An hour of typing down the drain.

If that sounds like amusing but ancient history, recall from RSSOwl and Feed Validation that a single errant control character in an RSS feed makes RSSOwl refuse the entire feed.

The date of the TECO story isn’t reported, but TECO was invented in 1963. (Wikipedia has a nice article: TECO (text editor).)

Fifty (50) years later, we are still struggling with a sensible response to errant control characters in data feeds?

Are you filtering invalid control characters from RSS feeds?

Or are you still “current,” circa 1963?
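
If you want to stop fighting 1963’s battles, stripping the control characters that XML 1.0 forbids takes only a few lines. A minimal sketch in Python; the function name is mine:

    import re

    # XML 1.0 (and therefore RSS) permits only tab, newline, and
    # carriage return among the C0 control characters; everything else
    # is illegal and makes a conforming parser reject the feed.
    ILLEGAL_XML_CHARS = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')

    def strip_illegal_control_chars(text):
        """Drop control characters that XML 1.0 disallows."""
        return ILLEGAL_XML_CHARS.sub('', text)

    print(strip_illegal_control_chars('ten pages\x07 of text'))  # 'ten pages of text'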

Who nailed the principles of great UI design?

Filed under: Interface Research/Design,Usability,User Targeting,Users — Patrick Durusau @ 8:30 am

Who nailed the principles of great UI design? Microsoft, that’s who by Andrew C. Oliver.

From the post:

One of the best articles I’ve ever read on user interface design is this 12-year-old classic — written by Microsoft, no less. Published long before smartphones and modern tablets emerged, it fully explains the essence of good UI design. Amazingly, it criticizes Microsoft’s own UIs and explains why they are bad, though it was written at a time when Microsoft was not known for its humility.

Because my company has a mobile application division — and increasingly does full application development in our enterprise open source division — I often have to explain what makes a good or bad UI to customers. I’ve frequently referred to this article by way of explanation.

To give you an idea of my assessment of the “12-year-old classic,” I have saved the page and converted it to PDF for local reading/printing.

It is worth re-reading every month or so if you are interested in user interfaces.

Or should I say if you are interested in successful user interfaces.

Read Andrew’s post as well. It updates us on the continuing relevance of IUI (Inductive User Interface) for desktop, web, and mobile interfaces.

I first saw this at DZone.

October 18, 2012

Cross-Community? Try Japan, 1980’s, For Success, Today!

Filed under: Interface Research/Design,User Targeting,Users — Patrick Durusau @ 10:37 am

Leveraging the Kano Model for Optimal Results by Jan Moorman.

Jan’s post outlines what you need to know to understand and use a UX model known as the “Kano Model.”

In short, the Kano Model is a way to evaluate how customers (the folks who buy products, not your engineers) feel about product features.

You are ahead of me if you guessed that positive reactions to product features are the goal.

Jan and company returned to the original research. That’s an important point, because applying research mechanically will get you mechanical results.

From the post:

You are looking at a list of 18 proposed features for your product. Flat out, 18 are too many to include in the initial release given your deadlines, and you want to identify the optimal subset of these features.

You suspect an executive’s teenager suggested a few. Others you recognize from competitor products. Your gut instinct tells you that none of the 18 features are game changers and you’re getting pushback on investing in upfront generative research.

It’s a problem. What do you do?

You might try what many agile teams and UX professionals are doing: applying a method that first emerged in Japan during the 1980s, called the “Kano Model,” which measures customers’ emotional reactions to individual features. At projekt202, we’ve had great success in doing just that. Our success emerged from revisiting Kano’s original research and through trial and error. What we discovered is that it really matters how you design and perform a Kano study. It matters how you analyze and visualize the results.

We have also seen how the Kano Model is a powerful tool for communicating the ROI of upfront generative research, and how results from Kano studies inform product roadmap decisions. Overall, Kano studies are a very useful tool to have in our research toolkit.

Definitely an approach to incorporate in UX evaluation.
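
For the curious, the mechanics are easy to sketch. A standard Kano study pairs a “functional” question (how would you feel if the feature were present?) with a “dysfunctional” question (how would you feel if it were absent?) and maps each answer pair onto a category. A minimal sketch of that lookup in Python, following the standard Kano evaluation table; the scale labels are my shorthand:

    # Categories: A = Attractive, O = One-dimensional (performance),
    # M = Must-be, I = Indifferent, R = Reverse, Q = Questionable.
    # Rows are the functional answer; columns are the dysfunctional one.
    KANO_TABLE = {
        'like':     {'like': 'Q', 'expect': 'A', 'neutral': 'A', 'tolerate': 'A', 'dislike': 'O'},
        'expect':   {'like': 'R', 'expect': 'I', 'neutral': 'I', 'tolerate': 'I', 'dislike': 'M'},
        'neutral':  {'like': 'R', 'expect': 'I', 'neutral': 'I', 'tolerate': 'I', 'dislike': 'M'},
        'tolerate': {'like': 'R', 'expect': 'I', 'neutral': 'I', 'tolerate': 'I', 'dislike': 'M'},
        'dislike':  {'like': 'R', 'expect': 'R', 'neutral': 'R', 'tolerate': 'R', 'dislike': 'Q'},
    }

    def classify(functional, dysfunctional):
        return KANO_TABLE[functional][dysfunctional]

    # A user who likes having the feature and dislikes its absence marks
    # it as a performance ("one-dimensional") feature:
    print(classify('like', 'dislike'))  # O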

March 10, 2012

Ad targeting at Yahoo

Filed under: Ad Targeting,Marketing,User Targeting — Patrick Durusau @ 8:20 pm

Ad targeting at Yahoo by Greg Linden.

From the post:

A remarkably detailed paper, “Web-Scale User Modeling for Targeting” (PDF), will be presented at WWW 2012 that gives many insights into how Yahoo does personalized advertising.

In summary, the researchers describe a system used in production at Yahoo that does daily builds of large user profiles. Each profile contains tens of thousands of features that summarize the interests of each user from the web pages they have viewed, searches they made, and ads they have viewed, clicked on, and converted (bought something) on. They explain how important it is to use conversions, not just ad clicks, to train the system. They measure the importance of using recent history (what you did in the last couple days), of using fine-grained data (detailed categories and even some specific pages and queries), of using large profiles, and of including data about ad views (which is a huge and low quality data source since there are multiple ad views per page view), and find all those significantly help performance.
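
To make the profile-building idea concrete, here is a toy sketch in Python. It is not Yahoo’s pipeline; the feature names, the two-day half-life, and the decay scheme are all assumptions for illustration of sparse per-user features with recency weighting:

    import time
    from collections import defaultdict

    HALF_LIFE_SECONDS = 2 * 24 * 3600  # assumed two-day half-life

    def decay_weight(event_ts, now):
        # Exponential decay: an event loses half its weight per half-life,
        # so recent history counts for more, as the paper reports it should.
        return 0.5 ** ((now - event_ts) / HALF_LIFE_SECONDS)

    def build_profile(events, now=None):
        # events: iterable of (timestamp, feature) pairs, e.g.
        # (ts, 'query:running shoes') or (ts, 'conversion:sports').
        now = now if now is not None else time.time()
        profile = defaultdict(float)
        for ts, feature in events:
            profile[feature] += decay_weight(ts, now)
        return dict(profile)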

You need to read the paper and Greg’s analysis (+ additional references) if you are interested in user profiles/marketing.

Even if you are not, I think the paper offers a window into one view of user behavior. Whether that view works for you, your ad clients, or topic map applications is another question.
