Archive for the ‘Information Overload’ Category

20 Slack Apps You’ll Love

Saturday, April 9th, 2016

20 Slack Apps You’ll Love

From the post:

Slack is taking the business world by storm. More and more companies are using this communication tool—and it’s becoming an increasingly robust platform due to all of the integrations being built on top of it. Now, you can do pretty much everything in Slack—from tracking how your customers use your app, to keeping tabs on company finances at a glance, to getting a daily digest of top news from around the web.

Here are 20 of the Product Hunt community’s most-loved Slack integrations. Trust us—once you give some of these a try, you’ll wonder how you ever made it through the day without them.

I tagged this under “information overload” because, even though I use and enjoy Slack, the last thing I need is another app to “manage” information flow.

What I desperately need is a mechanism that filters (no cat pics), promotes important content (not necessarily the most tweeted/reposted), integrates multiple sources/feeds with no repetition (how many times need I see “bombing in Brussels”? I get it), and provides one-touch access to a history governed by the same rules.
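As a rough sketch of the deduplication and filtering piece only, here is a toy feed filter. All names and data are hypothetical, and Jaccard word overlap stands in for whatever real similarity measure such a mechanism would use:

```python
import re

def words(text):
    """Lowercased word set for a headline."""
    return set(re.findall(r"\w+", text.lower()))

def is_repeat(headline, seen, threshold=0.5):
    """True if the headline's word overlap with any kept headline crosses the threshold."""
    w = words(headline)
    for s in seen:
        sw = words(s)
        if w | sw and len(w & sw) / len(w | sw) >= threshold:
            return True
    return False

def filter_feed(items, blocked, threshold=0.5):
    """Drop blocked topics, then drop near-duplicate headlines, keeping feed order."""
    kept = []
    for headline in items:
        if any(b in headline.lower() for b in blocked):
            continue
        if is_repeat(headline, kept, threshold):
            continue
        kept.append(headline)
    return kept

feed = [
    "Bombing in Brussels kills dozens",
    "Brussels bombing kills dozens, officials say",
    "Cat pics of the week",
    "New study on subject indexing",
]
print(filter_feed(feed, blocked=["cat pics"]))
# → ['Bombing in Brussels kills dozens', 'New study on subject indexing']
```

The two Brussels headlines share four of seven distinct words (Jaccard ≈ 0.57), so the second is suppressed; a real mechanism would also need the ranking and history pieces.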

Although I gather and process large collections of information, I can only work closely with 10 to 20 results at any given point.

More than 20 “hits” is just advertising for the “depth/breadth” of your search mechanism. Or perhaps my lack of skill with your search mechanism.

You are very likely to find a useful app in this collection but if not, don’t despair! The post concludes with a link to a list of over 350 more Slack apps!

Enjoy!

This is your Brain on Big Data: A Review of “The Organized Mind”

Monday, November 17th, 2014

This is your Brain on Big Data: A Review of “The Organized Mind” by Stephen Few.

From the post:

In the past few years, several fine books have been written by neuroscientists. In this blog I’ve reviewed those that are most useful and placed Daniel Kahneman’s Thinking, Fast & Slow at the top of the heap. I’ve now found its worthy companion: The Organized Mind: Thinking Straight in the Age of Information Overload.

This new book by Daniel J. Levitin explains how our brains have evolved to process information and he applies this knowledge to several of the most important realms of life: our homes, our social connections, our time, our businesses, our decisions, and the education of our children. Knowing how our minds manage attention and memory, especially their limitations and the ways that we can offload and organize information to work around these limitations, is essential for anyone who works with data.

See Stephen’s review for an excerpt from the introduction and summary comments on the work as a whole.

I am particularly looking forward to reading Levitin’s take on the transfer of information tasks to us and the resulting cognitive overload.

I don’t have the volume yet, but it occurs to me that the shift from indexes (the Readers’ Guide to Periodical Literature and the like) and librarians to full-text search engines is yet another example of the transfer of information tasks to us.

Indexers and librarians do a better job of finding information than we do because discovery of information is a difficult intellectual task. Well, perhaps discovering relevant and useful information is the difficult task. Almost without exception, every search on a major search engine produces a result. Perhaps not a useful result, but a result nonetheless.

Using indexers and librarians produces a line item in someone’s budget. What is needed is research on the differential between results from indexers/librarians and results from the rest of us, and what that differential translates to as a line item in enterprise budgets.

That type of research could influence university, government and corporate budgets as the information age moves into high gear.

The Organized Mind by Daniel J. Levitin is a must have for the holiday wish list!

Condensing News

Thursday, June 12th, 2014

Information Overload: Can algorithms help us navigate the untamed landscape of online news? by Jason Cohn.

From the post:

Digital journalism has evolved to a point of paradox: we now have access to such an overwhelming amount of news that it’s actually become more difficult to understand current events. IDEO New York developer Francis Tseng is—in his spare time—searching for a solution to the problem by exploring its root: the relationship between content and code. Tseng received a grant from the Knight Foundation to develop Argos*, an online news aggregation app that intelligently collects, summarizes and provides contextual information for news stories. Having recently finished version 0.1.0, which he calls the first “complete-ish” release of Argos, Tseng spoke with veteran journalist and documentary filmmaker Jason Cohn about the role technology can play in our consumption—and comprehension—of the news.

Great story and very interesting software. And as Alyona notes in her tweet, it’s open source!

Any number of applications, particularly for bloggers who are scanning lots of source material every day.

Intended for online news, but a similar application would be useful for TV news as well. In the Atlanta, Georgia area a broadcast could be prefaced by:

  • Accidents (grisly ones) 25%
  • Crimes (various) 30%
  • News previously reported but it’s a slow day today 15%
  • News to be reported on a later broadcast 10%
  • Politics (non-contextualized posturing) 10%
  • Sports (excluding molesting stories reported under crimes) 5%
  • Weather 5%

I haven’t timed the news and some channels are worse than others but take that as a recurrent, public domain summary of Atlanta news. 😉

For digital news feeds, check out the Argos software!

I first saw this in a tweet by Alyona Medelyan.

How We Read….[Does Your Topic Map Contribute to Information Overload?]

Thursday, December 6th, 2012

How we read, not what we read, may be contributing to our information overload by Justin Ellis.

From the post:

Every day, a new app or service arrives with the promise of helping people cut down on the flood of information they receive. It’s the natural result of living in a time when an ever-increasing number of news providers push a constant stream of headlines at us every day.

But what if it’s the ways we choose to read the news — not the glut of news providers — that make us feel overwhelmed? An interesting new study out of the University of Texas looks at the factors that contribute to the concept of information overload, and found that, for some people, the platform on which news is being consumed can make all the difference between whether you feel overwhelmed.

The study, “News and the Overloaded Consumer: Factors Influencing Information Overload Among News Consumers,” was conducted by Avery Holton and Iris Chyi. They surveyed more than 750 adults on their digital consumption habits and perceptions of information overload. On the central question of whether they feel overloaded with the amount of news available, 27 percent said “not at all”; everyone else reported some degree of overload.

The results imply that the more constrained the platform for delivery of content, the less overwhelmed users feel. Reading news on a cell phone, for example, sits at the constrained end; the links and videos on Facebook sit at the other extreme.

Which makes me curious about information interfaces in general and topic map interfaces in particular.

Does the traditional topic map interface (think Omnigator) contribute to a feeling of information overload?

If so, how would you alter that display to offer the user less information by default but allow its expansion upon request?

Compare to a book index, which offers sparse information on a subject, that can be expanded by following a pointer to fuller treatment of a subject.

I don’t think replicating a print index with hyperlinks in place of traditional references is the best solution but it might be a starting place for consideration.
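One way to picture that starting place in code is the print-index pattern: show a sparse entry by default and expand to the fuller treatment only on request. A minimal sketch, with all names and data hypothetical:

```python
class SparseIndex:
    """A book-index-style lookup: sparse entry by default, fuller treatment on request."""

    def __init__(self):
        self.entries = {}  # term -> (sparse summary, full treatment)

    def add(self, term, summary, details):
        self.entries[term] = (summary, details)

    def lookup(self, term):
        """Default view: the short pointer, like a line in a print index."""
        return self.entries[term][0]

    def expand(self, term):
        """Expanded view, shown only when the user asks for it."""
        return self.entries[term][1]

idx = SparseIndex()
idx.add("aboutness",
        "see subject indexing, p. 42",
        "Hutchins (1977) on summarization vs. the linguistic organization of texts...")
print(idx.lookup("aboutness"))
# → see subject indexing, p. 42
```

The point of the design is that the default response stays small; the user, not the interface, decides when to follow the pointer.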

Will Data Storage Make Us Dumber?

Wednesday, October 10th, 2012

Coming to a data center and then a desktop near you:

Case Western Reserve University researchers have developed technology aimed at making an optical disc that holds 1 to 2 terabytes of data – the equivalent of 1,000 to 2,000 copies of Encyclopedia Britannica. The entire print collection of the Library of Congress could fit on five to 10 discs.

Only a matter of time before you have the Library of Congress on a single disc on your local computer. All of it.

Questions:

  • Can you find useful information about a subject?
  • If you find it once, can you find it again?
  • If you can find it again, how much work does it take?
  • Can you share your trail of discovery or “bread crumbs” with others?

If TB data storage means you can’t find information, doesn’t that mean you are getting dumber, one TB at a time?

Storage density isn’t going to slow down so we had better start working on search/IR.
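For a sense of where search/IR work starts, the classic building block is an inverted index: map every word to the documents containing it, then intersect the postings lists for a query. A toy sketch over hypothetical document ids, stdlib only:

```python
import re
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in re.findall(r"\w+", text.lower()):
            index[word].add(doc_id)
    return index

def search(index, query):
    """Ids of documents containing every word of the query (boolean AND)."""
    terms = re.findall(r"\w+", query.lower())
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for t in terms[1:]:
        result &= index.get(t, set())
    return result

docs = {
    "loc-001": "Encyclopedia Britannica, volume one",
    "loc-002": "subject indexing and information retrieval",
    "loc-003": "information overload in the digital age",
}
index = build_index(docs)
print(sorted(search(index, "information")))
# → ['loc-002', 'loc-003']
```

Finding documents is the easy half; the questions above (re-finding, effort, shareable bread crumbs) are exactly what a bare index like this does not answer.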

See: Making computer data storage cheaper and easier

Are You An IT Hostage?

Monday, August 13th, 2012

As I promised last week in From Overload to Impact: An Industry Scorecard on Big Data Business Challenges [Oracle Report], the key finding that is missing from Oracle’s summary:

Executives’ Biggest Data Management Gripes:*

#1 Don’t have the right systems in place to gather the information we need (38%)

#2 Can’t give our business managers access to the information they need; need to rely on IT (36%)

Ask your business managers: Do they feel like IT hostages?

You are likely to be surprised at the answers you get.

IT’s vocabulary acts as an information clog.

A clog that impedes the flow of information in your organization.

Information that can improve the speed and quality of business decision making.

The critical point is: Information clogs are bad for business.

Do you want to borrow my plunger?

From Overload to Impact: An Industry Scorecard on Big Data Business Challenges [Oracle Report]

Friday, August 10th, 2012

From Overload to Impact: An Industry Scorecard on Big Data Business Challenges [Oracle Report]

Summary:

IT powers today’s enterprises, which is particularly true for the world’s most data-intensive industries. Organizations in these highly specialized industries increasingly require focused IT solutions, including those developed specifically for their industry, to meet their most pressing business challenges, manage and extract insight from ever-growing data volumes, improve customer service, and, most importantly, capitalize on new business opportunities.

The need for better data management is all too acute, but how are enterprises doing? Oracle surveyed 333 C-level executives from U.S. and Canadian enterprises spanning 11 industries to determine the pain points they face regarding managing the deluge of data coming into their organizations and how well they are able to use information to drive profit and growth.

Key Findings:

  • 94% of C-level executives say their organization is collecting and managing more business information today than two years ago, by an average of 86% more
  • 29% of executives give their organization a “D” or “F” in preparedness to manage the data deluge
  • 93% of executives believe their organization is losing revenue – on average, 14% annually – as a result of not being able to fully leverage the information they collect
  • Nearly all surveyed (97%) say their organization must make a change to improve information optimization over the next two years
  • Industry-specific applications are an important part of the mix; 77% of organizations surveyed use them today to run their enterprise—and they are looking for more tailored options

What key finding did they miss?

They cover it in the forty-two (42) page report but it doesn’t appear here.

Care to guess what it is?

Forgotten key finding post coming Monday, 13 August 2012. Watch for it!

I first saw this at Beyond Search.

History of Information Organization (Infographic)

Thursday, March 8th, 2012

From Cartography to Card Catalogs [Infographic]: History of Information Organization

Mindjet has posted an infographic and blog post about the history of information organization. I have embedded the graphic below.

Let me preface my remarks by saying I have known people at Mindjet and it is a fairly remarkable organization. And to be fair, the history of information organization is of interest to me, although I am far from being a specialist in the field.

However, when a graphic jumps from “850 CE The First Byzantine Encyclopedia,” to “1276 CE Oldest Continuously Functioning Library” and informs the reader on the edge in between that was “3,000 years ago,” it seems to be lacking in precision or proofing, perhaps both.

Although information has to be summarized for such a presentation, I thought the rise of writing in Egypt/Sumeria would have merited a note, perhaps along with the library of Ashurbanipal (the first library of the ancient Middle East) or the Library of Alexandria, just to name two. Note that you would have to go back before Ashurbanipal to reach 3,000 years ago. And there were written texts, and collections of such texts, for anywhere from 2,000 to 3,000 years before that.

I do appreciate that Mindjet doesn’t think information issues arose with the digital computer. I am hopeful that they will encourage a re-examination of older methods and solutions in hopes of finding clues to new solutions.

The concept of “aboutness” in subject indexing

Wednesday, October 5th, 2011

I just finished reading a delightful paper by W. J. Hutchins, ‘The concept of “aboutness” in subject indexing,’ which was presented at a Colloquium on Aboutness held by the Co-ordinate Indexing Group, 18 April 1977 and was reprinted in Readings in Information Retrieval, edited by Karen Sparck Jones and Peter Willett, Morgan Kaufman Publishers, Inc., San Francisco, California, 1997.

I discovered the paper in a hard copy of Readings in Information Retrieval, but it is also online, The concept of “aboutness” in subject indexing.

Hutchins writes in his abstract:

The common view of the ‘aboutness’ of documents is that the index entries (or classifications) assigned to documents represent or indicate in some way the total contents of documents; indexing and classifying are seen as processes involving the ‘summarization’ of the texts of documents. In this paper an alternative concept of ‘aboutness’ is proposed based on an analysis of the linguistic organization of texts, which is felt to be more appropriate in many indexing environments (particularly in non-specialized libraries and information services) and which has implications for the evaluation of the effectiveness of indexing systems.

You can read the details of how he suggests discovering the “aboutness” of documents but I was struck by his observation that the ‘summarization’ practice furthers the end of exhaustive search. Under Objectives of indexing, Hutchins says:

In the context of the special library and similarly specialized information services, the ‘summarization’ approach to subject indexing is most appropriate. Indexers are generally able to define clearly the interests and levels of knowledge of the readers they are serving; they are thus able to produce ‘summaries’ biased in the most helpful directions for their readers. More importantly, indexers can normally assume that most users are already very knowledgeable on most of the topics they look for in the indexes provided. They can assume that the usual search is for references to all documents treating a particular topic, since any one may have something ‘new’ to say about it that the reader did not know before. The fact that some references will lead users to texts which tell them nothing they did not previously know should not normally worry them unduly—it is the penalty they expect to pay for the assurance that the search has been as exhaustive as feasible.

Exhaustive search is one type of search that drives tests for the success of indexing:

The now traditional parameters of ‘recall’, ‘precision’ and ‘fallout’ are clearly valid for systems in which success is measured in terms of the ability to retrieve all documents which have something to say on a particular topic—that is to say, in systems based on the ‘summarization’ approach.*

You could say that full-text indexing/searching is different from ‘summarization’ by a professional indexer, but is it? Or have we simply substituted non-professional indexers into the process?

With ‘summarization,’ a professional indexer chooses terms that represent the content of a document. With full-text searching, the terms chosen on an ad-hoc basis by a user come to represent a ‘summary’ of entire documents. And in both cases, all the documents so summarized are returned to the user, in other words, the search is exhaustive.

Google/Bing/Yahoo! searches are examples of exhaustive searches of little value. I can find two or three thousand (2,000-3,000) new pages of material relevant to topic map issues every day. Can you say information overload?

Or is that information volume overload? Out of the two or three thousand (2,000-3,000) pages per day, probably more like fifty to one hundred (50-100) are worth my attention. That is what “old-style” indexing brought to the professional researcher.
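The gap between 3,000 hits and the 100 worth reading is, in effect, a ranking problem. A crude stand-in for what a professional indexer does, using a hypothetical interest profile and simple term counting:

```python
def rank_by_interest(pages, interest_terms, k):
    """Keep the k pages that mention the interest profile most often."""
    scored = [(sum(p.lower().count(t) for t in interest_terms), p) for p in pages]
    relevant = [(s, p) for s, p in scored if s > 0]       # discard zero-score pages
    relevant.sort(key=lambda sp: sp[0], reverse=True)     # highest score first
    return [p for _, p in relevant[:k]]

pages = [
    "topic maps and subject indexing in practice",
    "celebrity news roundup",
    "indexing, indexing, and more indexing",
    "weather for the weekend",
]
print(rank_by_interest(pages, ["indexing", "topic map"], k=2))
# → ['indexing, indexing, and more indexing', 'topic maps and subject indexing in practice']
```

An indexer's judgment is far richer than counting terms, of course; the sketch only shows the shape of the filter, not its quality.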