Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

October 14, 2014

How designers prototype at GDS

Filed under: Design,Interface Research/Design — Patrick Durusau @ 10:53 am

How designers prototype at GDS by Rebecca Cottrell.

From the post:

All of the designers at GDS can code or are learning to code. If you’re a designer who has used prototyping tools like Axure for a large part of your professional career, the idea of prototyping in code might be intimidating. Terrifying, even.

I’m a good example of that. When I joined GDS I felt intimidated by the idea of using Terminal and things like Git and GitHub, and just the perceived slowness of coding in HTML.

At first I felt my workflow had slowed down significantly, but the reason for that was the learning curve involved – I soon adapted and got much faster.

GDS has lots of tools (design patterns, code snippets, front-end toolkit) to speed things up. Sharing what I learned in the process felt like a good idea to help new designers get to grips with how we work.

Not a rigid set of prescriptions but experience at prototyping and pointers to other resources. Whether you have a current system of prototyping or not, you are very likely to gain a tip or two from this post.

I first saw this in a tweet by Ben Terrett.

August 23, 2014

Data + Design

Filed under: Data,Design,Survey,Visualization — Patrick Durusau @ 2:17 pm

Data + Design: A simple introduction to preparing and visualizing information by Trina Chiasson, Dyanna Gregory and others.

From the webpage:

ABOUT

Information design is about understanding data.

Whether you’re writing an article for your newspaper, showing the results of a campaign, introducing your academic research, illustrating your team’s performance metrics, or shedding light on civic issues, you need to know how to present your data so that other people can understand it.

Regardless of what tools you use to collect data and build visualizations, as an author you need to make decisions around your subjects and datasets in order to tell a good story. And for that, you need to understand key topics in collecting, cleaning, and visualizing data.

This free, Creative Commons-licensed e-book explains important data concepts in simple language. Think of it as an in-depth data FAQ for graphic designers, content producers, and less-technical folks who want some extra help knowing where to begin, and what to watch out for when visualizing information.

As of today, Data + Design is the product of fifty (50) volunteers from fourteen (14) countries. At eighteen (18) chapters and just shy of three hundred (300) pages, this is a solid introduction to data and its visualization.

The source code is on GitHub, along with information on how you can contribute to this project.

A great starting place but my social science background is responsible for my caution concerning chapters 3 and 4 on survey design and questions.

All of the information and advice in those chapters is good, but it leaves the impression that you (the reader) can design an effective survey instrument. There is a big difference between an “effective” survey instrument and a series of questions pretending to be one. Both will measure “something,” but the question is whether your instrument provides you with actionable intelligence.

For a survey on anything remotely mission critical, like user feedback on an interface or service, get as much professional help as you can afford.

When was the last time you heard of a candidate for political office or a serious vendor using Survey Monkey? There’s a reason you don’t hear such reports. Can you guess it?

I first saw this in a tweet by Meta Brown.

July 19, 2014

…Ad-hoc Contextual Inquiry

Filed under: Design,Interface Research/Design,UX — Patrick Durusau @ 6:24 pm

Honing Your Research Skills Through Ad-hoc Contextual Inquiry by Will Hacker.

From the post:

It’s common in our field to hear that we don’t get enough time to regularly practice all the types of research available to us, and that’s often true, given tight project deadlines and limited resources. But one form of user research–contextual inquiry–can be practiced regularly just by watching people use the things around them and asking a few questions.

I started thinking about this after a recent experience returning a rental car to a national brand at the Phoenix, Arizona, airport.

My experience was something like this: I pulled into the appropriate lane and an attendant came up to get the rental papers and send me on my way. But, as soon as he started, someone farther up the lane called loudly to him saying he’d been waiting longer. The attendant looked at me, said “sorry,” and ran ahead to attend to the other customer.

A few seconds later a second attendant came up, took my papers, and jumped into the car to check it in. She was using an app on a tablet that was attached to a large case with a battery pack, which she carried over her shoulder. She started quickly tapping buttons, but I noticed she kept navigating back to the previous screen to tap another button.

Curious being that I am, I asked her if she had to go back and forth like that a lot. She said “yes, I keep hitting the wrong thing and have to go back.”

Will expands his story into why and how to explore random user interactions with technology.

If you want to become better at contextual inquiry and observation, Will has the agenda for you.

He concludes:

Although exercises like this won’t tell us the things we’d like to know about the products we work on, they do let us practice the techniques of contextual inquiry and observation and make us more sensitive to various design issues. These experiences may also help us build the case in more companies for scheduling time and resources for in-field research with our actual customers.

Government Software Design Questions

Filed under: Design,Interface Research/Design,Use Cases,UX — Patrick Durusau @ 3:49 pm

10 questions to ask when reviewing design work by Ben Terrett.

Ben and a colleague reduced a list of design review questions by Jason Fried down to ten:

10 questions to ask when reviewing design work

1. What is the user need?

2. Is what it says and what it means the same thing?

3. What’s the take away after 3 seconds? (We thought 8 seconds was a bit long.)

4. Who needs to know that?

5. What does someone know now that they didn’t know before?

6. Why is that worth a click?

7. Are we assuming too much?

8. Why that order?

9. What would happen if we got rid of that?

10. How can we make this more obvious?

A great list for reviewing any design!

Where design doesn’t just mean an interface but presentation of data as well.

I am now following @benterrett and you should too.

It is a healthy reminder that not everyone in government wants to harm their own citizens and others. A minority do, but let’s not forget true public servants while opposing tyrants.

I first saw the ten questions post in Nat Torkington’s Four short links: 18 July 2014.

June 1, 2014

More Science for Computer Science

Filed under: Computer Science,Design,Science,UML — Patrick Durusau @ 6:44 pm

In Debunking Linus’s Law with Science I pointed you to a presentation by Felienne Hermans outlining why the adage:

given enough eyeballs, all bugs are shallow

is not only false but the exact opposite is in fact true. The more people who participate in development of software, the more bugs it will contain.

Remarkably, I have found another instance of the scientific method being applied to computer science.

The abstract for On the use of software design models in software development practice: an empirical investigation by Tony Gorschek, Ewan Tempero, and Lefteris Angelis reads as follows:

Research into software design models in general, and into the UML in particular, focuses on answering the question how design models are used, completely ignoring the question if they are used. There is an assumption in the literature that the UML is the de facto standard, and that use of design models has had a profound and substantial effect on how software is designed by virtue of models giving the ability to do model-checking, code generation, or automated test generation. However for this assumption to be true, there has to be significant use of design models in practice by developers.

This paper presents the results of a survey summarizing the answers of 3785 developers answering the simple question on the extent to which design models are used before coding. We relate their use of models with (i) total years of programming experience, (ii) open or closed development, (iii) educational level, (iv) programming language used, and (v) development type.

The answer to our question was that design models are not used very extensively in industry, and where they are used, the use is informal and without tool support, and the notation is often not UML. The use of models decreased with an increase in experience and increased with higher level of qualification. Overall we found that models are used primarily as a communication and collaboration mechanism where there is a need to solve problems and/or get a joint understanding of the overall design in a group. We also conclude that models are seldom updated after initially created and are usually drawn on a whiteboard or on paper.

I plan on citing this paper the next time someone claims that UML diagrams will be useful for readers of a standard.

If you are interested in fact correction issues at Wikipedia, you might want to challenge the following statement in the article on UML:

UML has been found useful in many design contexts,[5] so much so that it has become ubiquitous in its field.

At least the second half of it, “so much so that it has become ubiquitous in its field,” appears to be false.

Do you know of any other uses of science with regard to computer science?

I first saw this in a tweet by Erik Meijer.

May 21, 2014

How we built interactive heatmaps…

Filed under: Design,Heatmaps,Interface Research/Design — Patrick Durusau @ 2:22 pm

How we built interactive heatmaps using Solr and Heatmap.js by Chris Becker.

From the post:

One of the things we obsess over at Shutterstock is the customer experience. We’re always aiming to better understand how customers interact with our site in their day to day work. One crucial piece of information we wanted to know was which elements of our site customers were engaging with the most. Although we could get that by running a one-off report, we wanted to be able to dig into that data for different segments of customers based on their language, country, purchase decisions, or a/b test variations they were viewing in various periods of time.

To do this we built an interactive heatmap tool to easily show us where the “hot” and “cold” parts of our pages were — where customers clicked the most, and where they clicked the least. The tool we built overlaid this heatmap on top of the live site, so we could see the site the way users saw it, and understand where most of our customer’s clicks took place. Since customers are viewing our site in many different screen resolutions we wanted the heatmap tool to also account for the dynamic nature of web layouts and show us heatmaps for any size viewport that our site is used in.

If you are offering a web interface to topic map (or other information services) this is a great way to capture user feedback on your UI.

PS: shutterstock-heatmap-toolkit (GitHub)
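The hard part the post mentions, combining clicks recorded at many different screen resolutions into one heatmap, can be sketched in a few lines. This is not Shutterstock’s code; the coordinates, names, and the proportional-scaling assumption are mine (real layouts reflow rather than scale, so treat this as a toy):

```python
from collections import defaultdict

def bucket_clicks(clicks, ref_width=1280, cell=20):
    """Aggregate raw click coordinates into grid-cell counts.

    Each click carries the viewport width it was recorded in;
    coordinates are scaled to a common reference width so data from
    different screen resolutions can be combined into one grid.
    """
    grid = defaultdict(int)
    for x, y, width in clicks:
        scale = ref_width / width
        key = (int(x * scale) // cell, int(y * scale) // cell)
        grid[key] += 1
    return grid

clicks = [
    (105, 48, 1280),   # (x, y, viewport width when recorded)
    (210, 96, 2560),   # the same relative spot on a wider screen
    (400, 300, 1280),
]
heat = bucket_clicks(clicks)
```

A library like Heatmap.js would then render the cell counts as a colored overlay on top of the live page, which is roughly the division of labor the post describes: the backend aggregates, the browser draws.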

April 24, 2014

Tools for ideation and problem solving: Part 1

Filed under: Design,Ideation,Problem Solving — Patrick Durusau @ 9:05 am

Tools for ideation and problem solving: Part 1 by Dan Lockton.

From the post:

Back in the darkest days of my PhD, I started blogging extracts from the thesis as it was being written, particularly the literature review. It helped keep me motivated when I was at a very low point, and seemed to be of interest to readers who were unlikely to read the whole 300-page PDF or indeed the publications. Possibly because of the amount of useful terms in the text making them very Google-able, these remain extremely popular posts on this blog. So I thought I would continue, not quite where I left off, but with a few extracts that might actually be of practical use to people working on design, new ideas, and understanding people’s behaviour.

The first article (to be split over two parts) is about toolkits (and similar things, starting with an exploration of idea generation methods), prompted by much recent interest in the subject via projects such as Lucy Kimbell, Guy Julier, Jocelyn Bailey and Leah Armstrong’s Mapping Social Design Research & Practice and Nesta’s Development Impact & You toolkit, and some of our discussions at the Helen Hamlyn Centre for the Creative Citizens project about different formats for summarising information effectively. (On this last point, I should mention the Sustainable Cultures Engagement Toolkit developed in 2012-13 by my colleagues Catherine Greene and Lottie Crumbleholme, with Johnson Controls, which is now available online (12.5MB PDF).)

The article below is not intended to be a comprehensive review of the field, but was focused specifically on aspects which I felt were relevant for a ‘design for behaviour change’ toolkit, which became Design with Intent. I should also note that since the below was written, mostly in 2010-11, a number of very useful articles have collected together toolkits, card decks and similar things. I recommend: Venessa Miemis’s 21 Card Decks, Hanna Zoon’s Depository of Design Toolboxes, Joanna Choukeir’s Design Methods Resources, Stephen Anderson’s answer on this Quora thread, and Ola Möller’s 40 Decks of Method Cards for Creativity. I’m sure there are others.

Great post but best read when you have time to follow links and to muse about what you are reading.

I think the bicycle with square wheels was the best example in part 1. Which example do you like best? (Yes, I am teasing you into reading the post.)

Having a variety of problem solving/design skills will enable you to work with groups that respond to different problem solving strategies.

Important when eliciting designs for topic maps, since users never talk about the implied semantics “known by everyone.”

Unfortunately, our machines, not being people, don’t know what everyone else knows; they know only what they are told.

I first saw this in Nat Torkington’s Four short links: 23 April 2014.

December 24, 2013

Design, Math, and Data

Filed under: Dashboard,Data,Design,Interface Research/Design — Patrick Durusau @ 2:58 pm

Design, Math, and Data: Lessons from the design community for developing data-driven applications by Dean Malmgren.

From the post:

When you hear someone say, “that is a nice infographic” or “check out this sweet dashboard,” many people infer that they are “well-designed.” Creating accessible (or for the cynical, “pretty”) content is only part of what makes good design powerful. The design process is geared toward solving specific problems. This process has been formalized in many ways (e.g., IDEO’s Human Centered Design, Marc Hassenzahl’s User Experience Design, or Braden Kowitz’s Story-Centered Design), but the basic idea is that you have to explore the breadth of the possible before you can isolate truly innovative ideas. We, at Datascope Analytics, argue that the same is true of designing effective data science tools, dashboards, engines, etc — in order to design effective dashboards, you must know what is possible.

As founders of Datascope Analytics, we have taken inspiration from Julio Ottino’s Whole Brain Thinking, learned from Stanford’s d.school, and even participated in an externship swap with IDEO to learn how the design process can be adapted to the particular challenges of data science (see interspersed images throughout).

If you fear “some assembly required,” imagine how users feel with new interfaces.

Good advice on how to explore potential interface options.

Do you think HTML5 will lead to faster mock-ups?

See for example:

21 Fresh Examples of Websites Using HTML5 (2013)

40+ Useful HTML5 Examples and Tutorials (2012)

HTML5 Website Showcase: 48 Potential Flash-Killing Demos (2009, est.)

November 27, 2013

Data Quality, Feature Engineering, GraphBuilder

Filed under: Data Quality,Design,ETL,GraphBuilder,Pig — Patrick Durusau @ 3:06 pm

Avoiding Cluster-Scale Headaches with Better Tools for Data Quality and Feature Engineering by Ted Willke.

Ted’s second slide reads:

Machine Learning may nourish the soul…

…but Data Preparation will consume it.

Ted starts off talking about the problems of data preparation but fairly quickly focuses in on property graphs and using Pig ETL.

He also outlines outstanding problems with Pig ETL (slides 29-32).

Nothing surprising, but good news that GraphBuilder 2 Alpha is due out in Dec ’13.

BTW, GraphBuilder 1.0 can be found at: https://01.org/graphbuilder/
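For a sense of what building a property graph from ETL output means, here is a toy sketch. It is not GraphBuilder’s API or Pig Latin, just the shape of the data; the rows and field names are hypothetical:

```python
# Rows such as an ETL job might emit (hypothetical data and fields).
purchases = [
    {"user": "alice", "item": "camera", "price": 120},
    {"user": "alice", "item": "tripod", "price": 35},
    {"user": "bob",   "item": "camera", "price": 120},
]

vertices = {}   # vertex id -> property map
edges = []      # (source, label, target, property map)

for row in purchases:
    # Vertices carry a type property; duplicate ids collapse naturally.
    vertices.setdefault(row["user"], {"type": "user"})
    vertices.setdefault(row["item"], {"type": "item"})
    # Edges carry their own properties, here the purchase price.
    edges.append((row["user"], "bought", row["item"],
                  {"price": row["price"]}))
```

The data-preparation pain Ted describes lives in the step before this one: deciding which columns become vertices, which become edge properties, and what to do with dirty values.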

November 26, 2013

The curse of NOARK

Filed under: Archives,Design,Standards — Patrick Durusau @ 10:18 am

The curse of NOARK by Lars Marius Garshol.

From the post:

I’m writing about a phenomenon that’s specifically Norwegian, but some things are easier to explain to foreigners, because we Norwegians have been conditioned to accept them. In this case I’m referring to the state of the art for archiving software in the Norwegian public sector, where everything revolves around the standard known as NOARK.

Let’s start with the beginning. Scandinavian chancelleries have a centuries-long tradition for a specific approach to archiving, which could be described as a kind of correspondence journal. Essentially, all incoming and outgoing mail, as well as important internal documents, were logged in a journal, with title, from, to, and date for each document. In addition, each document would be filed under a “sak”, which translates roughly as “case” or “matter under consideration”. Effectively, it’s a kind of tag which ties together one thread of documents relating to a specific matter.

The classic example is if the government receives a request of some sort, then produces some intermediate documents while processing it, and then sends a response. Perhaps there may even be couple of rounds of back-and-forth with the external party. This would be an archetypal “sak” (from now on referred to as “case”), and you can see how having all these documents in a single case file would be absolutely necessary for anyone responding to the case. In fact, it’s not dissimilar to the concept of an issue in an issue-tracking system.
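The journal-and-case model Lars describes is easy to sketch in code. The field names and sample data below are my own guesses for illustration, not NOARK’s actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class JournalEntry:
    """One line in the correspondence journal: a logged document."""
    title: str
    sender: str
    recipient: str
    logged: date
    case_id: str   # the "sak" tag tying the document to its case

journal = [
    JournalEntry("Request for permit", "Citizen A", "Agency",
                 date(2013, 11, 1), "2013/042"),
    JournalEntry("Internal assessment", "Agency", "Agency",
                 date(2013, 11, 8), "2013/042"),
    JournalEntry("Response to request", "Agency", "Citizen A",
                 date(2013, 11, 15), "2013/042"),
]

# A caseworker pulls the whole thread by its case id, much like
# listing every comment on one issue in an issue tracker.
cases = {}
for entry in journal:
    cases.setdefault(entry.case_id, []).append(entry)
```

The back-and-forth with an external party shows up as multiple entries sharing one case id, which is exactly why the single case file matters to whoever handles the matter.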

In this post and its continuation in Archive web services: a missed opportunity Lars details the shortcomings of the NOARK standard.

To some degree specifically Norwegian, but the problem of poor IT design is truly an international phenomenon.

I haven’t made any suggestions since the U.S. is home to the virtual case management debacle, the incredible melting NSA data center, not to mention the non-functional health care IT system known as HealthCare.gov.

Read these posts by Lars: you will encounter projects before mistakes like the ones he describes have been set in stone.

No guarantees of success but instead of providing semantic data management on top of a broken IT system, you could be providing semantic data management on top of a non-broken IT system.

Perhaps never a great IT system but I would settle for a non-broken one any day.

November 21, 2013

Set The WayBack Machine for 1978 – Destination: Unix

Filed under: Design,Programming — Patrick Durusau @ 3:58 pm

Bell System Technical Journal, v57: i6 July-August 1978

Where you will find the classic Unix papers by Ritchie, Thompson, and their Bell Labs colleagues.

Before you build another software monolith, you should spend some time reading this set of Unix classics.

Before you object to the age of the materials, can you name another OS that is forty (40)+ years old? (And still popular. I don’t count legacy systems in the basement of the SSA. 😉 )

Perhaps there’s something to the “small tool” mentality of Unix.

I first saw this in a tweet by CompSciFact.

November 7, 2013

Learning 30 Technologies in 30 Days…

Filed under: Design,Javascript,OpenShift,Programming — Patrick Durusau @ 9:32 am

Learning 30 Technologies in 30 Days: A Developer Challenge by Shekhar Gulati.

From the post:

I have taken a challenge wherein I will learn a new technology every day for a month. The challenge started on October 29, 2013. Below is the list of technologies I’ve started learning and blogging about. After my usual work day, I will spend a couple of hours learning a new technology and one hour writing about it. The goal of this activity is to get familiar with many of the new technologies being used in the developer community. My main focus is on JavaScript and related technologies. I’ll also explore other technologies that interest me like Java, for example. I may spend multiple days on the same technology, but I will pick a new topic each time within that technology. Wherever it makes sense, I will try to show how it can work with OpenShift. I am expecting it to be fun and a great learning experience.

The homepage of the challenge that currently points to:

  1. October 29, 2013 – Day 1: Bower—Manage Your Client Side Dependencies. The first day talks about Bower and how you can use it.

  2. October 30, 2013 – Day 2: AngularJS—Getting My Head Around AngularJS. This blog talks about how you can get started with AngularJS. It is a very basic blog and talks about how to build a simple bookshop application.

  3. October 31, 2013 – Day 3: Flask—Instant Python Web Development with Python and OpenShift. This blog introduces Flask–a micro framework for doing web development in Python. It also reviews the “Instant Flask Web Development” book and ports the sample application to OpenShift.

  4. November 1, 2013 – Day 4: PredictionIO—How to Build a Blog Recommender. This blog talks about how you can use PredictionIO to build a blog recommender.

  5. November 2, 2013 — Day 5: GruntJS—Let Someone Else Do My Tedious Repetitive Tasks. This blog talks about how we can let GruntJS perform tedious tasks on our behalf. It also covers how we can use grunt-markdown plugin to convert Markdown to HTML5.

  6. November 3, 2013 — Day 6: Grails–Rapid JVM Web Development with Grails And OpenShift. This blog talks about how we can use Grails to build a web application and then deploy it to OpenShift.

  7. November 4, 2013 – Day 7: GruntJS LiveReload–Take Productivity To Another Level. This blog talks about how we can use GruntJS watch plugin and live reload functionality to achieve extreme productivity.

  8. November 5, 2013 – Day 8: Harp–The Modern Static Web Server. This blog post discusses the Harp web server and how to install and use it.

  9. November 6, 2013 – Day 9: TextBlob–Finding Sentiments in Text.

I encountered the challenge via the Day 4: PredictionIO—How to Build a Blog Recommender post.

The more technologies you know the broader your options for creation and delivery of topic map content to users.

September 2, 2013

Defining Usability

Filed under: Design,Interface Research/Design,Usability — Patrick Durusau @ 7:39 pm

Over the Labor Day holiday weekend (U.S.) I had a house full of librarians.

That happens when you are married to a librarian, who has a first cousin who is a librarian and your child is also a librarian.

It’s no surprise they talked about library issues and information technology issues in libraries in particular.

One primary concern was how to define “usability” for a systems engineer.

Patrons could “request” items and would be assured that their request had been accepted. However, the “receiver” module for those messages, used by circulation, had no way to retrieve the requests.

From a systems perspective, the system was accepting requests, as designed. While circulation (who fulfills the requests) could not retrieve the messages, that was also part of the system design.

The user’s expectation that their request would be seen and acted upon was being disappointed.

Disappointment of a user expectation, even if within system design parameters, is, by definition, a failure of the UI.

The IT expectation, that users would make requests in person or by phone after enough silence, was the one that should have been disappointed.

Or to put it another way, IT systems do not exist to provide employment for people interested in IT.

They exist solely to assist users with tasks that may have very little to do with IT.

Users are interested in “real life” (a counter-part to “real world”) research, discovery, publication, invention, business, pleasure and social interaction.
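The library request failure above fits in a few lines of code. A minimal sketch with hypothetical names; the point is that `submit` alone satisfies the letter of the design, while `pending`, the retrieval path circulation needed, is the part that was missing:

```python
class RequestSystem:
    """The failure mode: the patron-facing half works as designed,
    but nothing surfaces accepted requests to circulation."""

    def __init__(self):
        self._inbox = []

    def submit(self, patron, item):
        # Patron-facing side: always reports success.
        self._inbox.append((patron, item))
        return "Your request has been accepted."

    def pending(self):
        # The piece missing from the system described above: without
        # a retrieval path like this, every submission "succeeds"
        # while no request is ever seen or acted upon.
        return list(self._inbox)

system = RequestSystem()
message = system.submit("patron-1", "Interlibrary loan title")
```

A test suite that only exercised `submit` would pass on the broken system too, which is why usability has to be defined from the user’s expectation, not the system specification.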

August 8, 2013

Bret Victor – The Future of Programming

Filed under: Computer Science,Design,Programming — Patrick Durusau @ 1:32 pm

I won’t try to describe or summarize Bret’s presentation for fear I will spoil it for you.

I can say that if you aspire to make a difference in computer science, large or small, this is a video for you.

There are further materials at Bret’s website: http://worrydream.com/

How to Design Programs, Second Edition

Filed under: Computer Science,Design,Programming — Patrick Durusau @ 12:44 pm

How to Design Programs, Second Edition by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, Shriram Krishnamurthi.

From the webpage:

Bad programming is easy. Idiots can learn it in 21 days, even if they are Dummies.

Good programming requires thought, but everyone can do it and everyone can experience the satisfaction that comes with it. The price is worth paying for the sheer joy of the discovery process, the elegance of the result, and the commercial benefits of a systematic program design process.

The goal of our book is to introduce readers of all ages and backgrounds to the craft of designing programs systematically. We assume few prerequisites: arithmetic, a tiny bit of middle school algebra, and the willingness to think through issues. We promise that the travails will pay off not just for future programmers but for anyone who has to follow a process or create one for others.

We are grateful to Ada Brunstein, our editor at MIT Press, who gave us permission to develop this second edition of “How to Design Programs” on-line.

Good to see this “classic” being revised online.

Differences: This second edition of “How to Design Programs” continues to present an introduction to systematic program design and problem solving. Here are some important differences:

  1. The recipes are applied in two different, typical settings: interactive graphical programs and so-called “batch” programs. The former mode of interaction is typical for games, the latter for data processing in business centers. Both kinds of programs are still created with our design recipes.

  2. While testing has always been a part of the “How to Design Programs” philosophy, the software started supporting it properly only in 2002, just after we had released the first edition. This new edition heavily relies on this testing support now.
  3. This edition of the book drops the design of imperative programs. The old chapters remain available on-line. The material will flow into the next volume of the book, “How to Design Components.”
  4. The book and its programs employ new libraries, also known as teachpacks. The preferred style is to link in these libraries via so-called require specifications, but it is still possible to add teachpacks via a menu in DrRacket.
  5. Finally, we decided to use a slightly different terminology:

     HtDP/1e      HtDP/2e
     -----------  -----------
     contract     signature
     union        itemization

Any other foundation texts that have abandoned imperative programming?

I first saw this in Nat Torkington’s Four short links: 5 August 2013.

August 3, 2013

Information Dashboard Design…

Filed under: Dashboard,Design,Interface Research/Design — Patrick Durusau @ 4:16 pm

Information Dashboard Design: Displaying Data for At-a-Glance Monitoring by Stephen Few.

The Amazon description:

A leader in the field of data visualization, Stephen Few exposes the common problems in dashboard design and describes its best practices in great detail and with a multitude of examples in this updated second edition. According to the author, dashboards have become a popular means to present critical information at a glance, yet few do so effectively. He purports that when designed well, dashboards engage the power of visual perception to communicate a dense collection of information efficiently and with exceptional clarity and that visual design skills that address the unique challenges of dashboards are not intuitive but rather learned. The book not only teaches how to design dashboards but also gives a deep understanding of the concepts—rooted in brain science—that explain the why behind the how. This revised edition offers six new chapters with sections that focus on fundamental considerations while assessing requirements, in-depth instruction in the design of bullet graphs and sparklines, and critical steps to follow during the design process. Examples of graphics and dashboards have been updated throughout, including additional samples of well-designed dashboards.

Disclosure: I follow Stephen’s blog but I have not seen either edition of this book.

However, if you want to send me a copy, I will post a review of it. 😉

Or point me to other reviews and I will update this post with pointers.

April 30, 2013

Patterns of information use and exchange:…

Filed under: Design,Interface Research/Design,Marketing,Usability,Users — Patrick Durusau @ 3:05 pm

Patterns of information use and exchange: case studies of researchers in the life sciences

From the post:

A report of research patterns in life sciences revealing that researcher practices diverge from policies promoted by funders and information service providers

This report by the RIN and the British Library provides a unique insight into how information is used by researchers across the life sciences. Undertaken by the University of Edinburgh’s Institute for the Study of Science, Technology and Innovation, the UK Digital Curation Centre and the University of Edinburgh’s Information Services, the report concludes that one-size-fits-all information and data sharing policies are not achieving scientifically productive and cost-efficient information use in life sciences.

The report was developed using an innovative approach to capture the day-to-day patterns of information use in seven research teams from a wide range of disciplines, from botany to clinical neuroscience. The study undertaken over 11 months and involving 56 participants found that there is a significant gap between how researchers behave and the policies and strategies of funders and service providers. This suggests that the attempts to implement such strategies have had only a limited impact. Key findings from the report include:

  • Researchers use informal and trusted sources of advice from colleagues, rather than institutional service teams, to help identify information sources and resources
  • The use of social networking tools for scientific research purposes is far more limited than expected
  • Data and information sharing activities are mainly driven by needs and benefits perceived as most important by life scientists rather than top-down policies and strategies
  • There are marked differences in the patterns of information use and exchange between research groups active in different areas of the life sciences, reinforcing the need to avoid standardised policy approaches

Not the most recent research in the area but a good reminder that users do as users do, not as system/software/ontology architects would have them do.

What approach does your software take?

Does it make users perform their tasks the “right” way?

Or does it help users do their tasks “their” way?

April 29, 2013

Atlas of Design

Filed under: Design,Graphics,Interface Research/Design,Mapping,Maps,Visualization — Patrick Durusau @ 2:01 pm

Atlas of Design by Caitlin Dempsey.

From the post:

Do you love beautiful maps? The Atlas of Design has been reprinted and is now available for purchase. Published by the North American Cartographic Information Society (NACIS), this compendium showcases cartography at some of its finest. The atlas was originally published in 2012 and features the work of 27 cartographers. In early 2012, a call for contributions was sent out and 140 entries from 90 different individuals and groups submitted their work. A panel of eight volunteer judges plus the book’s editors evaluated the entries and selected the finalists.

The focus of the Atlas of Design is on the aesthetics and design involved in mapmaking. Tim Wallace and Daniel Huffman, the editors of Atlas of Design explain the book’s introduction about the focus of the book:

Aesthetics separate workable maps from elegant ones.

This book is about the latter category.

My personal suspicion is that aesthetics separate legible topic maps from those that attract repeat users.

The only way to teach aesthetics (which varies by culture and social group) is by experience.

This is a great starting point for your aesthetics education.

April 26, 2013

Bad Practices

Filed under: Design,Interface Research/Design,Programming — Patrick Durusau @ 2:39 pm

Why Most People Don’t Follow Best Practices by Kendra Little.

Posted in a MS SQL Server context but the lesson applies to software, systems, and processes alike:

Unfortunately, human nature makes people persist all sorts of bad practices. I find everything in the wild from weekly reboots to crazy settings in Windows and SQL Server that damage performance and can cause outages. When I ask why the settings are in place, I usually hear a story that goes like this:

  • Once upon a time, in a land far far away there was a problem
  • The people of the land were very unhappy
  • A bunch of changes were made
  • Some of the changes were recommended by someone on the internet. We think.
  • The problem went away
  • The people of the land were happier
  • We hunkered down and just hoped the problem would never come back
  • The people of the land have been growing more and more unhappy over time again

Most of the time “best practices” are implemented to try and avoid pain rather than to configure things well. And most of the time they aren’t thought out in terms of long term performance. Most people haven’t really implemented any best practices, they’ve just reacted to situations.

How are the people of the land near you?

April 15, 2013

The Power of Collaboration [Cultural Gulfs]

Filed under: Collaboration,Design — Patrick Durusau @ 8:57 am

The Power of Collaboration by Andrea Ruskin.

From the post:

A quote that I stumbled on during grad school stuck with me. From the story of the elder’s box as told by Eber Hampton, it sums up my philosophy of working and teaching:

"How many sides do you see?"
"One," I said.
He pulled the box towards his chest and turned it so one corner faced me.
"Now how many do you see?"
"Now I see three sides."
He stepped back and extended the box, one corner towards him and one towards me.
"You and I together can see six sides of this box," he told me.

—Eber Hampton (2002) The Circle Unfolds, p. 41–42

Andrea describes a graduate school project to develop a learning resource for Aboriginal students.

A task made more difficult by Andrea being a non-Aboriginal designer.

The gap between you and a topic map customer may not be as obvious but will be no less real.

April 5, 2013

Successful PROV Tutorial at EDBT

Filed under: Design,Modeling,Provenance — Patrick Durusau @ 1:13 pm

Successful PROV Tutorial at EDBT by Paul Groth.

From the post:

On March 20th, 2013 members of the Provenance Working Group gave a tutorial on the PROV family of specifications at the EDBT conference in Genova, Italy. EDBT (“Extending Database Technology”) is widely regarded as one of the prime venues in Europe for dissemination of data management research.

The 1.5 hours tutorial was attended by about 26 participants, mostly from academia. It was structured into three parts of approximately the same length. The first two parts introduced PROV as a relational data model with constraints and inference rules, supported by a (nearly) relational notation (PROV-N). The third part presented known extensions and applications of PROV, based on the extensive PROV implementation report and implementations known to the presenter at the time.

All the presentation material is available here.

As the first part of the tutorial notes:

  • Provenance is not a new subject
    • workflow systems
    • databases
    • knowledge representation
    • information retrieval
  • Existing community-grown vocabularies
    • Open Provenance Model (OPM)
    • Dublin Core
    • Provenir ontology
    • Provenance vocabulary
    • SWAN provenance ontology
    • etc.

The existence of “other” vocabularies isn’t an issue for topic maps.

You can query on “your” vocabulary and obtain results from “other” vocabularies.

Enriches your information and that of others.

You will need to know about the vocabularies of others and their oddities.

For the W3C work on provenance, follow this tutorial and the others it mentions.
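The cross-vocabulary querying described above can be sketched in a few lines of Python: a crosswalk table maps a query term to equivalent terms in other provenance vocabularies, so a query phrased in "your" vocabulary also matches records tagged with "theirs". The term pairings and record format below are invented for illustration, not authoritative mappings between these vocabularies.

```python
# Illustrative crosswalk: query term -> equivalent terms elsewhere.
# These pairings are examples only, not official vocabulary alignments.
CROSSWALK = {
    "prov:Entity":   {"opm:Artifact"},
    "prov:Activity": {"opm:Process"},
    "prov:Agent":    {"opm:Agent", "dcterms:creator"},
}

# Provenance records tagged with terms from different vocabularies.
RECORDS = [
    {"id": 1, "type": "opm:Artifact", "label": "raw dataset"},
    {"id": 2, "type": "prov:Entity",  "label": "cleaned dataset"},
    {"id": 3, "type": "opm:Process",  "label": "cleaning run"},
]

def query(term):
    """Return records whose type matches `term` or any mapped equivalent."""
    wanted = {term} | CROSSWALK.get(term, set())
    return [r for r in RECORDS if r["type"] in wanted]

# Querying with the PROV term also finds the OPM-tagged record.
for r in query("prov:Entity"):
    print(r["id"], r["label"])
```

The same idea scales up to topic map merging: the crosswalk is just the simplest possible stand-in for identity-based merging of subjects named differently in different vocabularies.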

April 2, 2013

Topic Map Patterns/Use Cases

Filed under: Design,Design Patterns,Graphics,UML,Visualization — Patrick Durusau @ 3:18 pm

The sources for topic map patterns I mentioned yesterday use a variety of modeling languages:

Data Model Patterns: Conventions of Thought by David C. Hay. (Uses CASE*Method™ (Baker’s Notation))

Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans. (Uses UML (Unified Modeling Language))

Developing High Quality Data Models by Matthew West. (Uses EXPRESS (EXPRESS-G is for information models))

The TMDM and Kal’s Design Patterns both use UML notation.

Although constraints will be expressed in TMCL, visually it looks to me like UML should be the notation of choice.

Will require transposition from non-UML notation but seems worthwhile to have a uniform notation.

Any strong reasons to use another notation?

April 1, 2013

Design Pattern Sources?

Filed under: Design,Design Patterns,Modeling — Patrick Durusau @ 2:23 pm

To continue with the need for topic map design pattern thread, what sources would you suggest for design patterns?

Thinking that it would be more efficient to start from commonly known patterns and then when necessary, to branch out into new or unique ones.

Not to mention that starting with familiar patterns, as opposed to esoteric ones, will provide some comfort level for users.

Sources that I have found useful include:

Data Model Patterns: Conventions of Thought by David C. Hay.

Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans.

Developing High Quality Data Models by Matthew West. (Think Shell Oil. Serious enterprise context.)

Do you have any favorites you would suggest?

After a day or two of favorites, the next logical step would be to choose a design pattern and, with an eye on Kal's Design Pattern Examples, attempt to fashion a design template.

Just one, without bothering to specify what comes next.

Working one bite at a time will make the task seem manageable.

Yes?

Topic Map Design Patterns For Information Architecture

Filed under: Design,Design Patterns,Modeling,TMCL — Patrick Durusau @ 1:21 pm

Topic Map Design Patterns For Information Architecture by Kal Ahmed.

Abstract:

Software design patterns give programmers a high level language for discussing the design of software applications. For topic maps to achieve widespread adoption and improved interoperability, a set of topic map design patterns are needed to codify existing practices and make them available to a wider audience. Combining structured descriptions of design patterns with Published Subject Identifiers would enable not only the reuse of design approaches but also encourage the use of common sets of PSIs. This paper presents the arguments for developing and publishing topic map design patterns and a proposed notation for diagramming design patterns based on UML. Finally, by way of examples, the paper presents some design patterns for representation of traditional classification schemes such as thesauri, hierarchical and faceted classification.

Kal used UML to model the design patterns and their constraints. (TMCL, the Topic Map Constraint Language, had yet to be written.)

For visual modeling purposes, are there any constraints in TMCL that cannot be modeled in UML?

I ask because I have not compared TMCL to UML.

Using UML to express the generic constraints in TMCL would be a first step towards answering the need for topic maps design patterns.
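As a rough illustration of the kind of pattern Kal's paper codifies, here is a Python sketch of a thesaurus-style hierarchy: topics linked by broader/narrower associations, with simple traversals over them. The structure is deliberately simplified for illustration and makes no claim to TMDM or TMCL conformance.

```python
# Topics in the map; each (narrower, broader) pair stands in for one
# association of type "broader-narrower" with two role players.
topics = {"vehicles", "cars", "trucks", "sports cars"}

broader = {
    ("cars", "vehicles"),
    ("trucks", "vehicles"),
    ("sports cars", "cars"),
}

def narrower_of(topic):
    """Direct narrower topics of `topic`."""
    return sorted(n for (n, b) in broader if b == topic)

def ancestors(topic):
    """All broader topics, walking up the hierarchy."""
    out = []
    parents = [b for (n, b) in broader if n == topic]
    while parents:
        p = parents.pop()
        out.append(p)
        parents.extend(b for (n, b) in broader if n == p)
    return out

print(narrower_of("vehicles"))   # ['cars', 'trucks']
print(ancestors("sports cars"))  # ['cars', 'vehicles']
```

A design pattern would pin down exactly this shape — the association type, the two role types, and the constraint that the hierarchy stays acyclic — so that different topic maps encoding thesauri become interchangeable.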

Topic Map Design Patterns

Filed under: Design,Design Patterns,Modeling — Patrick Durusau @ 12:47 pm

A recent comment on topic map design patterns reads in part:

The second problem, and the one I’m working through now, is that information modeling with topic maps is a new paradigm for me (and most people I’m sure) and the information on topic map models is widely dispersed. Techquila had some design patterns that were very useful and later those were put put in a paper by A. Kal but, in general, it is a lot more difficult to figure out the information model with topic maps than it is with SQL or NoSQL or RDF because those other technologies have a lot more open discussions of designs to cover specific use cases. If those discussions existed for topic maps, it would make it easier for non-experts like me to connect the high-level this-is-how-topic-maps-work type information (that is plentiful) with the this-is-the-problem-and-this-is-the-model-that-solves-it type information (that is hard to find for topic maps).

Specifically, the problem I’m trying to solve and many other real world problems need a semi-structured information model, not just an amorphous blob of topics and associations. There are multiple dimensions of hierarchies and sequences that need to be modeled so that the end user can query the system with OLAP type queries where they drill up and down or pan forward and back through the information until they find what they need.

Do you know of any books of Topic Maps use cases and/or design patterns?

Unfortunately I had to say that I knew of no “Topic Maps use cases and/or design patterns” books.

There is XML Topic Maps: Creating and Using Topic Maps for the Web by Sam Hunting and Jack Park, but it isn't what I would call a design pattern book.

While searching for the Hunting/Park book I did find: Topic Maps: Semantische Suche im Internet (Xpert.press) by Richard Widhalm and Thomas Mück, with a 2012 publication date. Don't be deceived. This is a reprint of the 2002 edition.

Any books that I have missed on topic maps modeling in particular?

The comment identifies a serious lack of resources on use cases and design patterns for topic maps.

My suggestion is that we all refresh our memories of Kal’s work on topic map design patterns (which I will cover in a separate post) and start to correct this deficiency.

What say you all?
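The commenter's "drill up and down" requirement is essentially a rollup over a hierarchy dimension. A minimal Python sketch, with an invented product hierarchy standing in for a topic hierarchy and invented counts standing in for occurrence data:

```python
from collections import defaultdict

# child -> parent links forming one hierarchy dimension
parent = {
    "laptops": "electronics",
    "phones": "electronics",
    "novels": "books",
}

# leaf-level facts (e.g. occurrence or document counts per topic)
facts = {"laptops": 5, "phones": 7, "novels": 3}

def rollup():
    """Aggregate leaf counts up the hierarchy for drill-up queries."""
    totals = defaultdict(int, facts)
    for child, count in facts.items():
        node = parent.get(child)
        while node is not None:
            totals[node] += count
            node = parent.get(node)
    return dict(totals)

totals = rollup()
print(totals["electronics"])  # 12 -- drilled up
print(totals["laptops"])      # 5  -- drilled down to a leaf
```

In a real topic map the `parent` links would be broader/narrower associations and a design pattern would name the association and role types, but the navigation the commenter asks for reduces to traversals like this one.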

March 29, 2013

Writing Effective Requirement Documents – An Overview

Filed under: Design,Interface Research/Design,Requirements,Use Cases — Patrick Durusau @ 5:08 pm

Writing Effective Requirement Documents – An Overview

From the post:

In every UX Design project, the most important part is the requirements gathering process. This is an overview of some of the possible methods of requirements gathering.

Good design will take into consideration all business, user and functional requirements and even sometimes inform new functionality & generate new requirements, based on user comments and feedback. Without watertight requirements specification to work from, much of the design is left to assumptions and subjectivity. Requirements put a project on track & provide a basis for the design. A robust design always ties back to its requirements at every step of the design process.

Although there are many ways to translate project requirements, Use cases, User Stories and Scenarios are the most frequently used methods to capture them. Some elaborate projects may have a comprehensive Business Requirements Document (BRD), which forms the absolute basis for all deliverables for that project.

I will get a bit deeper into what each of this is and in which context each one is used…

Requirements are useful for any project. Especially useful for software projects. But critical for a successful topic map project.

Topic maps can represent or omit any subject of conversation, any relationship between subjects or any other information about a subject.

Not a good practice to assume others will make the same assumptions as you about the subjects to include or what information to include about them.

They might and they might not.

For any topic maps project, insist on a requirements document.

A good requirements document results in accountability for both sides.

The client for specifying what was desired and being responsible for changes and their impacts. The topic map author for delivering on the terms and detail specified in the requirements document.

March 11, 2013

Microsoft Goes After 3 Big Data Myths

Filed under: BigData,Design — Patrick Durusau @ 2:00 pm

Microsoft Goes After 3 Big Data Myths by Jeff Bertolucci.

It’s Jeff’s coverage of the second myth that I want to mention:

The second myth, Microsoft said, pertains to the looming data scientist shortage: Enterprises can’t find enough qualified big data gurus to pull insights from unstructured information sources, such as social media feeds and machine sensor data.

“While it is true that the industry needs more data scientists, it is equally true that most organizations are equipped with the employees they need today to help them gather the valuable insights from their data that will better their business,” writes Kelly.

In other words, big data tools and apps can save the day. Microsoft’s argument ties in with the so-called democratization of data movement. Popular tools, such as Excel with the Data Explorer add-in, allow end users to perform BI analysis without having to pester IT for help.

Isn’t that similar to the difference between being able to use MS Word and being an author?

I know lots of people who can do one but not the other.

The danger in the Microsoft argument is that staff on the payroll performing poorly at BI analysis isn't a line item in the budget. Lost opportunities never are.

On the other hand, getting competent help that uses Microsoft or other data analytics tools is a line item in the budget.

Managers may be tempted in budget conscious times to opt for the no budget line item option.

Consider that carefully, the opportunities you lose may be your own.


Update: A better example is that using MS PowerPoint does not make you a presenter. We have all sat through death-by-PowerPoint presentations and will again.

March 6, 2013

Data Governance needs Searchers, not Planners

Filed under: Data Governance,Data Management,Data Models,Design — Patrick Durusau @ 7:22 pm

Data Governance needs Searchers, not Planners by Jim Harris.

From the post:

In his book Everything Is Obvious: How Common Sense Fails Us, Duncan Watts explained that “plans fail, not because planners ignore common sense, but rather because they rely on their own common sense to reason about the behavior of people who are different from them.”

As development economist William Easterly explained, “A Planner thinks he already knows the answer; A Searcher admits he doesn’t know the answers in advance. A Planner believes outsiders know enough to impose solutions; A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.”

I made a similar point in my post Data Governance and the Adjacent Possible. Change management efforts are resisted when they impose new methods by emphasizing bad business and technical processes, as well as bad data-related employee behaviors, while ignoring unheralded processes and employees whose existing methods are preventing other problems from happening.

If you don’t remember any line from any post you read here or elsewhere, remember this one:

“…they rely on their own common sense to reason about the behavior of people who are different from them.”

Whenever you encounter a situation where that description fits, you will find failed projects, waste and bad morale.

February 23, 2013

Failure By Design

Filed under: BigData,Design,Government,Government Data — Patrick Durusau @ 3:40 pm

Did you know the Security and Exchange Commission (SEC) is now collecting 400 gigabytes of market data daily?

Midas [Market Information Data Analytics System], which is costing the SEC $2.5 million a year, captures data such as time, price, trade type and order number on every order posted on national stock exchanges, every cancellation and modification, and every trade execution, including some off-exchange trades. Combined it adds up to billions of daily records.

So, what’s my complaint?

Midas won’t be able to fill in all of the current holes in SEC’s vision. For example, the SEC won’t be able to see the identities of entities involved in trades and Midas doesn’t look at, for example, futures trades and trades executed outside the system in what are known as “dark pools.” (emphasis added)

What?

The one piece of information that could reveal patterns of insider trading, churning, and a whole host of other securities crimes, is simply not collected.

I wonder who would benefit from the SEC not being able to track insider trading, churning, etc.?

People engaged in insider trading, churning, etc. would be my guess.

You?

Maybe someone should ask SEC chairman Elisse Walter or Gregg Berman (who oversees MIDAS) if tracking entities would help with SEC enforcement?

If they agree, then ask why not now?

For that matter, why not open up the data + entities so others can help the SEC with analysis of the data?

Obvious questions J. Nicholas Hoover should have asked for SEC Makes Big Data Push To Analyze Markets.

February 21, 2013

Why Business Intelligence Software Is Failing Business

Filed under: Design,Interface Research/Design,Usability — Patrick Durusau @ 8:37 pm

Why Business Intelligence Software Is Failing Business

From the post:

Business intelligence software is supposed to help businesses access and analyze data and communicate analytics and metrics. I have witnessed improvements to BI software over the years, from mobile and collaboration to interactive discovery and visualization, and our Value Index for Business Intelligence finds a mature set of technology vendors and products. But even as these products mature in capabilities, the majority lack features that would make them easy to use. Our recent research on next-generation business intelligence found that usability is the most important evaluation criteria for BI technology, outpacing functionality (49%) and even manageability (47%). The pathetic state of dashboards and the stupidity of KPI illustrate some of the obvious ways the software needs to improve for businesses to gain the most value from it. We need smarter business intelligence, and that means not just more advanced sets of capabilities that are designed for the analysts, but software designed for those who need to use BI information.

BI considerations

Our research finds the need to collaborate and share (67%) and inform and deliver (61%) are in the top five evaluation categories for software. A few communication improvements, highlighted below, would help organizations better utilize analytics and BI information.

Imagine that, usability is ahead of functionality.

Successful semantic software vendors will draw several lessons from this post.
