Archive for the ‘Marketing’ Category

Apologies For Silence – Ergonomics Problem

Tuesday, July 25th, 2017

Apologies for the sudden silence!

I have had a bad situation, ergonomically speaking, which manifested itself in my left hand.

Have isolated the problem and repairs/exercises are underway. Won’t be back to full speed for several months but will be trying to do better.

Hope you are having a great summer!

If You Can’t See The Data, The Statistics Are False

Saturday, June 10th, 2017

The headline, “If You Can’t See The Data, The Statistics Are False,” is my one-line summary of 73.6% of all Statistics are Made Up – How to Interpret Analyst Reports by Mark Suster.

You should read Suster’s post in full, if for no other reason than his accounts of how statistics are created, that’s right, created, for reports:


But all of the data projections were so different so I decided to call some of the research companies and ask how they derived their data. I got the analyst who wrote one of the reports on the phone and asked how he got his projections. He must have been about 24. He said, literally, I sh*t you not, “well, my report was due and I didn’t have much time. My boss told me to look at the growth rate average over the past 3 years and increase it by 2% because mobile penetration is increasing.” There you go. As scientific as that.

I called another agency. They were more scientific. They had interviewed telecom operators, handset manufacturers and corporate buyers. They had come up with a CAGR (compounded annual growth rate) that was 3% higher than the other report, which in a few years makes a huge difference. I grilled the analyst a bit. I said, “So you interviewed the people to get a plausible story line and then just did a simple estimation of the numbers going forward?”

“Yes. Pretty much”
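A short sketch (with hypothetical numbers, not figures from either report) shows why a 3-point difference in CAGR “in a few years makes a huge difference”:

```python
def project(base, cagr, years):
    """Project a value forward at a compounded annual growth rate."""
    return base * (1 + cagr) ** years

# Hypothetical: a $100M market projected at a 10% vs. a 13% CAGR.
low = project(100.0, 0.10, 5)    # ~161 after five years
high = project(100.0, 0.13, 5)   # ~184 after five years

# The gap keeps widening: after ten years it is ~2.59x vs. ~3.39x the base.
print(round(low, 1), round(high, 1))
```

Three points of CAGR is already a 14% gap at year five, and the difference compounds every year after that.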

Write down the name of your favorite business magazine.

How many stories have you enjoyed over the past six months with “scientific” statistics like those?

Suster offers five common tips for being a more informed consumer of data, all of which require effort on your part.

I have only one, which requires only reading on your part:

Can you see the data behind the statistic? By that I mean: is the original data available to the reader, along with who collected it, how it was collected, and when?

If not, the statistic is either false or inflated.

The test I suggest applies at the point where you encounter the statistic. It puts the burden on the author who wants their statistic credited to empower the reader to evaluate it.

Imagine the data analyst story where the growth rate statistic had this footnote:

1. Averaged growth rate over past three (3) years and added 2% at direction of management.

It reports the same statistic but also warns the reader the result is a management fantasy. Might be right, might be wrong.

Patronize publications with statistics + underlying data. Authors and publishers will get the idea soon enough.

Congressional Fact Laundering

Thursday, May 4th, 2017

How a Fake Cyber Statistic Raced Through Washington by Joseph Marks.

The statistic you are about to read is false:


The statistic, typically attributed to the National Cyber Security Alliance, is that 60 percent of small businesses that suffer a cyberattack will go out of business within six months.

It appears in a House bill that won unanimous support from that chamber’s Science Committee this week, cited as evidence the federal government must devote more resources to helping small businesses shore up their cybersecurity. It’s also in a companion Senate bill that sailed through the Commerce Committee in April.

Both bills require the government’s cyber standards agency, the National Institute of Standards and Technology, to devote more of its limited resources to creating cybersecurity guidance for small businesses.

Federal Trade Commissioner Maureen Ohlhausen cited the figure in testimony before the House Small Business Committee in March, as did Charles Romine, director of NIST’s Information Technology Laboratory.

Sen. Jeanne Shaheen, D-N.H., ranking member on the Senate Small Business Committee, cited the figure in a letter to Amazon asking the internet commerce giant what it was doing to improve cybersecurity for its third-party sellers.

Reminder: The claim that “60 percent of small businesses that suffer a cyberattack will go out of business within six months” is FALSE.

The bulk of the article is an amusing romp through various parties attempting to deny they were the source of the false information and/or that the presence of false information had any impact on the legislation.

The second part, that the false information had no impact on the legislation, seems plausible to me. Legislation rarely has any relationship to information, true or false, so I can understand why false information doesn’t trouble those cited.

Congressional hearing documents could simply repeat the standard Lorem Ipsum:

“Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.”

It has as much of a relationship to any legislation Congress passes as the carefully published committee hearings.

There is an upside to Joseph’s story:


The size and expertise of congressional staffs who write and vet legislation have also steadily diminished over time as have the staffs of congressional services such as the Government Accountability Office and the Congressional Research Service designed to provide Congress with authoritative data.

“Basically, [congressional staffers] have less expertise available to them, are more reliant on what other people tell them and it’s much easier for erroneous information to get into the political system,” said Daniel Schuman, a former House and Senate staffer who also worked for the Congressional Research Service and is now policy director for Demand Progress, a left-leaning internet rights and open government organization.

It’s what I call “fact laundering.” It’s like money laundering but legal.

You load your member of Congress up with fake facts, which they cite (without naming you), which other people then spread (without checking), which other members of Congress and agencies cite in turn, and in just weeks you have gone from a false fact to a congressional fact.

An added bonus, even when denied, a congressional fact can become stronger.

Facts on demand as it were.

Addictive Technology (And the Problem Is?)

Thursday, May 4th, 2017

Tech Companies are Addicting People! But Should They Stop? by Nir Eyal.

From the post:

To understand technology addiction (or any addiction for that matter) you need to understand the Q-tip. Perhaps you’ve never noticed there’s a scary warning on every box of cotton swabs that reads, “CAUTION: Do not enter ear canal…Entering the ear canal could cause injury.” How is it that the one thing most people do with Q-tips is the thing manufacturers explicitly warn them not to do?

“A day doesn’t go by that I don’t see people come in with Q-tip-related injuries,” laments Jennifer Derebery, an inner ear specialist in Los Angeles and the past president of the American Academy of Otolaryngology. “I tell my husband we ought to buy stock in the Q-tips company; it supports my practice.” It’s not just that people do damage to their ears with Q-tips, it’s that they keep doing damage. Some even call it an addiction.

On one online forum, a user asks, “Anyone else addicted to cleaning their ears with Q-tips?…I swear to God if I go more than a week without sticking Q-tips in my ears, I go nuts. It’s just so damn addicting…” Elsewhere, another ear-canal enterer also associates ear swabbing with dependency: “How can I detox from my Q-tips addiction?” The phenomenon is so well known that MADtv based a classic sketch on a daughter having to hide Q-tip use from her parents like a junkie.

Q-tip addiction shares something in common with other, more prevalent addictions like gambling, heroin, and even Facebook use. Understanding what I call the Q-tip Effect raises important questions about products we use every day, and the responsibilities their makers have in relation to the welfare of their users.
… (emphasis in original)

It’s a great post on addiction (read the definition), technology, etc., but Nir loses me here:


However, there’s a difference between accepting the unavoidable edge cases among unknown users and knowingly promoting the Q-tip Effect. When it comes to companies that know exactly who’s using, how, and how much, much more can be done. To do the right thing by their customers, companies have an obligation to help when they know someone wants to stop, but can’t. Silicon Valley technology companies are particularly negligent by this ethical measure.

The only basis for this “…obligation to help when they know someone wants to stop, but can’t” appears to be Nir’s personal opinion.

That’s ok and he is certainly entitled to it, but Nir hasn’t offered to pay the cost of meeting his projected ethical obligation.

People enjoy projecting ethical obligations onto others: witness the anti-abortion, anti-birth control, and anti-drug movements, among others.

Imposing moral obligations that others pay for is more popular in the U.S. than adultery. I don’t have any hard numbers on that last point. Let’s say imposing moral obligations paid for by others is wildly popular and leave it at that.

If I had a highly addictive (in Nir’s sense) app, I would be using the profits to rent backhoes for anyone who needed one along the DAPL pipeline. No questions asked.

It’s an absolute necessity to raise ethical questions about technology and society in general.

But my first question is always: Who pays the cost of your ethical concern?

If it’s not you, that says a lot to me about your concern.

Building an Online Profile:… [Toot Your Own Horn]

Thursday, February 23rd, 2017

Building an Online Profile: Social Networking and Amplification Tools for Scientists by Antony Williams.

Seventy-seven slides from a February 22, 2017 presentation at NC State University on building an online profile.

Pure gold, whether you are building your own profile or one for an alternate identity. 😉

I like this slide in particular:

Take the “toot your own horn” advice to heart.

Your posts/work will never be perfect so don’t wait for that before posting.

Any errors you make are likely to go unnoticed until you correct them.

How to Help Trump

Wednesday, December 21st, 2016

How to Help Trump by George Lakoff.

From the post:

Without knowing it, many Democrats, progressives and members of the news media help Donald Trump every day. The way they help him is simple: they spread his message.

Think about it: every time Trump issues a mean tweet or utters a shocking statement, millions of people begin to obsess over his words. Reporters make it the top headline. Cable TV panels talk about it for hours. Horrified Democrats and progressives share the stories online, making sure to repeat the nastiest statements in order to refute them. While this response is understandable, it works in favor of Trump.

When you repeat Trump, you help Trump. You do this by spreading his message wide and far.

I know Lakoff from his Women, Fire, and Dangerous Things: What Categories Reveal about the Mind.

I haven’t read any of his “political” books but would buy them sight unseen on the strength of Women, Fire, and Dangerous Things.

Lakoff promises a series of posts using effective framing to “…expose and undermine Trump’s propaganda.”

Whether you want to help expose Trump or use framing to promote your own product or agenda, start following Lakoff today!

Preserving Ad Revenue With Filtering (Hate As Renewal Resource)

Monday, November 21st, 2016

Facebook and Twitter haven’t implemented robust and shareable filters for their respective content streams for fear of disturbing their ad revenue streams.* The power to filter is feared as the power to exclude ads.

Other possible explanations include: drone employment (old/new friends hired to discuss censoring content); hubris (wanting to decide what is “best” for others to see and read); NIH (not invented here), which explains the silence concerning my proposals for shareable content filters; others?

* Lest I be accused of spreading “fake news,” my explanation for the lack of robust and shareable filters on content on Facebook and Twitter is based solely on my analysis of their behavior and not any inside leaks, etc.

I have a solution for fearing filters as interfering with ad revenue.

All Facebook posts and Twitter tweets would be delivered with an additional Boolean field, ad, which defaults to true (an empty field counts as true, following Clojure), meaning the content can be filtered. When the field is false, that content cannot be filtered.

With filters registered and shared via Facebook and Twitter, testing those filters for proper operation (and rejecting any that filter ad content) is a purely algorithmic process.

Users pay to post ad content, the step at which the false flag can be set, so no one gets a filter exemption without paying for it.
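A minimal sketch of that scheme (the field names and API here are hypothetical, nothing Facebook or Twitter actually exposes):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    text: str
    ad: bool = True  # default (empty field) = True = "may be filtered"

def apply_filters(stream: List[Post], filters: List[Callable[[Post], bool]]) -> List[Post]:
    """Drop a post only when it is filterable (ad=True) and some user
    filter matches it; paid posts (ad=False) always pass through."""
    kept = []
    for post in stream:
        if post.ad and any(f(post) for f in filters):
            continue  # a user filter removed it
        kept.append(post)
    return kept

stream = [Post("totally fake news"), Post("Buy widgets! (paid)", ad=False)]
filtered = apply_filters(stream, [lambda p: "fake" in p.text])
print([p.text for p in filtered])  # only the paid post survives the "fake" filter
```

The test that a shared filter never removes ad content reduces to checking that it only ever matches posts with the flag set to true.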

What’s my interest? I’m interested in the creation of commercial filters for aggregation, exclusion and creating a value-add product based on information streams. Moreover, ending futile and bigoted attempts at censorship seems like a worthwhile goal to me.

The revenue potential for filters is nearly unlimited.

The number of people who hate rivals the number who want to filter the content seen by others. An unrestrained Facebook/Twitter will attract more hate and “fake news,” which in turn will drive a great need for filters.

Not a virtuous cycle but certainly a profitable one. Think of hate and the desire to censor as renewable resources powering that cycle.

PS: I’m not an advocate for hate and censorship but they are both quite common. Marketing is based on consumers as you find them, not as you wish they were.

Are You A Moral Manipulator?

Thursday, September 29th, 2016

I appreciated Nir’s reminder about the #1 rule for drug dealers.

If you don’t know it, the video is only a little over six minutes long.

Enjoy!

Why OrientDB?

Tuesday, September 6th, 2016

Why OrientDB?

From the webpage:

Understanding the strengths, limitations and trade-offs among the leading DBMS options can be DIS-ORIENTING. Developers have grown tired of making compromises in speed and flexibility or supporting several DBMS products to satisfy their use case requirements.

Thus, OrientDB was born: the first Multi-Model Open Source NoSQL DBMS that combines the power of graphs and the flexibility of documents into one scalable, high-performance operational database.

In addition to great software, OrientDB also has a clever marketing department:

[image: OrientDB tweet]

That’s an image from an OrientDB tweet that sends you to the Why OrientDB? page.

What’s your great image to gain attention?

PS: I remember one from an IT zine in the 1990’s where employees were racing around the office on fire. Does that ring a bell with anyone? Seems like it was one of the large format, Computer Shopper size zines.

Functor Fact @FunctorFact [+ Tip for Selling Topic Maps]

Tuesday, June 28th, 2016

John D. Cook has started @FunctorFact, tweeting “…about category theory and functional programming.”

John has a page listing his Twitter accounts. It needs to be updated to reflect the addition of @FunctorFact.

BTW, just by accident I’m sure, John’s blog post for today is titled: Category theory and Koine Greek. It has the following lesson for topic map practitioners and theorists:


Another lesson from that workshop, the one I want to focus on here, is that you don’t always need to convey how you arrived at an idea. Specifically, the leader of the workshop said that if you discover something interesting from reading the New Testament in Greek, you can usually present your point persuasively using the text in your audience’s language without appealing to Greek. This isn’t always possible—you may need to explore the meaning of a Greek word or two—but you can use Greek for your personal study without necessarily sharing it publicly. The point isn’t to hide anything, only to consider your audience. In a room full of Greek scholars, bring out the Greek.

This story came up in a recent conversation about category theory. You might discover something via category theory but then share it without discussing category theory. If your audience is well versed in category theory, then go ahead and bring out your categories. But otherwise your audience might be bored or intimidated, as many people would be listening to an argument based on the finer points of Koine Greek grammar. Microsoft’s LINQ software, for example, was inspired by category theory principles, but you’d be hard pressed to find any reference to this because most programmers don’t want to know or need to know where it came from. They just want to know how to use it.

Sure, it is possible to recursively map subject identities in order to arrive at a useful and maintainable mapping between subject domains, but the people with the checkbook are only interested in a viable result.

How you got there could involve enslaved pixies for all they care. They do care about negative publicity so keep your use of pixies to yourself.

Looking forward to tweets from @FunctorFact!

Visualizing Data Loss From Search

Thursday, April 14th, 2016

I used searches for “duplicate detection” (3,854 results) and “coreference resolution” (3,290 results) in “Ironically, Entity Resolution has many duplicate names” [Data Loss] to illustrate potential data loss in searches.

Here is a rough visualization of the information loss if you use only one of those terms:

[image: Venn diagram of search results for “duplicate detection” vs. “coreference resolution”]

If you search for “duplicate detection,” you miss all the articles shaded in blue.

If you search for “coreference resolution,” you miss all the articles shaded in yellow.
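The arithmetic behind the visualization is easy to sketch. The two result counts come from the searches above; the size of the overlap is an assumed figure, purely for illustration:

```python
# Counts from the searches above; the overlap is an assumption.
dup_total = 3854      # articles matching "duplicate detection"
coref_total = 3290    # articles matching "coreference resolution"
both = 610            # ASSUMED: articles using both terms

only_dup = dup_total - both
only_coref = coref_total - both
universe = only_dup + only_coref + both

loss_if_dup_only = only_coref / universe     # missed by "duplicate detection"
loss_if_coref_only = only_dup / universe     # missed by "coreference resolution"

print(f"'duplicate detection' alone misses {loss_if_dup_only:.0%} of the articles")
print(f"'coreference resolution' alone misses {loss_if_coref_only:.0%} of the articles")
```

Swap in real counts from a client’s own search engine and the same three lines of arithmetic quantify their loss.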

Suggestions for improving this visualization?

This visualization could be produced from a client’s own data, using their search engine/database, to identify the data loss they are suffering right now from search across departments.

With the caveat that not all data loss is bad and/or worth avoiding.

Imaginary example (so far): What if you could demonstrate no overlap in terminology between two vendors for the United States Army and the Air Force? That is, no query terms for one return useful results for the other.

That is a starting point for evaluating the use of topic maps.

While the divergence in terminologies is a given, the next question is: What is the downside to that divergence? What capability is lost due to that divergence?

Assuming you can identify such a capability, the next question is to evaluate the cost of reducing and/or eliminating that divergence versus the claimed benefit.

I assume the most relevant terms are going to be those internal to customers and/or potential customers.

Any interest in working this up into a client prospecting/topic map marketing tool?


Separately, I want to note my discovery (you probably already knew about it) of VennDIS: a JavaFX-based Venn and Euler diagram software to generate publication quality figures. Download here. (Apologies, the publication itself is paywalled.)

The export defaults to 800 x 800 resolution. If you need something smaller, edit the resulting image in Gimp.

It’s a testimony to the software that I was able to produce a useful image in less than a day. Kudos to the software!

Growthverse

Thursday, March 10th, 2016

Growthverse

From the webpage:

Growthverse was built for marketers, by marketers, with input from more than 100 CMOs.

Explore 800 marketing technology companies (and growing).

I originally arrived at this site here.

Interesting visualization that may result in suspects (they’re not prospects until you have serious discussions) for topic map based tools.

The site says the last update was in September 2015 so take heed that the data is stale by about six months.

That said, it’s easier than hunting down the 800+ companies on your own.

Good hunting!

Writing Clickbait TopicMaps?

Wednesday, January 20th, 2016

‘Shocking Celebrity Nip Slips’: Secrets I Learned Writing Clickbait Journalism by Kate Lloyd.

I’m sat at a desk in a glossy London publishing house. On the floors around me, writers are working on tough investigations and hard news. I, meanwhile, am updating a feature called “Shocking celebrity nip-slips: boobs on the loose.” My computer screen is packed with images of tanned reality star flesh as I write captions in the voice of a strip club announcer: “Snooki’s nunga-nungas just popped out to say hello!” I type. “Whoops! Looks like Kim Kardashian forgot to wear a bra today!”

Back in 2013, I worked for a women’s celebrity news website. I stumbled into the industry at a time when online editors were panicking: Their sites were funded by advertisers who demanded that as many people as possible viewed stories. This meant writing things readers loved and shared, but also resorting to shadier tactics. With views dwindling, publications like mine often turned to the gospel of search engine optimisation, also known as SEO, for guidance.

Like making a deal with a highly-optimized devil, relying heavily on SEO to push readers to websites has a high moral price for publishers. When it comes to female pop stars and actors, people are often more likely to search for the celebrity’s name with the words “naked,” “boobs,” “butt,” “weight,” and “bikini” than with the names of their albums or movies. Since 2008, “Miley Cyrus naked” has been consistently Googled more than “Miley Cyrus music,” “Miley Cyrus album,” “Miley Cyrus show,” and “Miley Cyrus Instagram.” Plus, “Emma Watson naked” has been Googled more than “Emma Watson movie” since she was 15. In fact, “Emma Watson feet” gets more search traffic than “Emma Watson style,” which might explain why one women’s site has a fashion feature called “Emma Watson is an excellent foot fetish candidate.”

If you don’t know what other people are searching for, try these two resources on Google Trends:

Hacking the Google Trends API (2014)

PyTrends – Pseudo API for Google Trends (Updated six days ago)

Depending on your sensibilities, you could collect content on celebrities into a topic map and when their searches spike, you can release links to the new material plus save readers the time of locating older content.
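A hypothetical sketch of the “release on a spike” trigger (the threshold and the interest numbers are invented; PyTrends could supply real interest-over-time data):

```python
def spikes(series, window=3, threshold=1.5):
    """Indices where interest exceeds `threshold` times the trailing
    `window` average: a crude trigger for re-releasing archived
    content on a celebrity. Purely illustrative."""
    hits = []
    for i in range(window, len(series)):
        trailing = sum(series[i - window:i]) / window
        if trailing and series[i] > threshold * trailing:
            hits.append(i)
    return hits

# Invented weekly search-interest numbers; week 4 is the spike.
interest = [40, 42, 38, 41, 95, 50, 44]
print(spikes(interest))
```

Each detected spike would queue up the topic-map-managed links for that celebrity, new material first, older content behind it.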

That might even be a viable business model.

Thoughts?

Building Web Apps Using Flask and Neo4j [O’Reilly Market Pricing For Content?]

Saturday, January 16th, 2016

Building Web Apps Using Flask and Neo4j

When I first saw this on one of my incoming feeds today I thought it might be of interest.

When I followed the link, I found an O’Reilly video, which breaks down to:

25:23 free minutes and 133:01 minutes for $59.99.

Rounding the running time down to two hours, that works out to about $30/hour for the video.

When you compare that to other links I saw today:

What is the value proposition that sets the price on an O’Reilly video?

So far as I can tell, pricing for content on the Internet is similar to the pricing of seats on airlines.

Pricing for airline seats is beyond “arbitrary” or “capricious.” More akin to “absurd” and/or “whatever a credulous buyer will pay.”

Speculations on possible pricing models O’Reilly is using?

Suggestions on a viable pricing model for content?

Successful Cyber War OPS As Of 2016.01.05 – (But Fear Based Marketing Works)

Thursday, January 14th, 2016

From the text just below the interactive map:

This map lists all unclassified Cyber Squirrel Operations that have been released to the public that we have been able to confirm. There are many more executed ops than displayed on this map however, those ops remain classified.

You can select by squirrel or other animal, year, even month and the map shows successful cyber operations.

Squirrels are in the lead with 623 successes, versus one success by the United States (Stuxnet).

Be careful who you show this map.

Any sane person will laugh and agree that squirrels are a larger danger to the U.S. power grid than any fantasized terrorist.

On the other hand, non-laughing people are making money from speaking engagements, consultations, government contracts, etc., all premised on fear of terrorists attacking the U.S. power grid.

People who laugh at the Cyber Squirrel 1 map, not so much.

They say it is the lizard part of your brain that controls “…fight, flight, feeding, fear, freezing-up, and fornication.”

That accords with my view that if we aren’t talking about fear, greed or sex, then we aren’t talking about marketing.

Are you willing to promote world views and uses of technology (think big data) that you know are in fact false or useless? At least in the current fear-of-terrorists mode, it’s nearly a guarantee of a payday.

Or are you looking for work from employers who realize that if you are willing to lie to gain a contract or consulting gig, you are quite willing to lie to them as well?

Your call.

PS: You can get CyberSquirrel1 Unit Patches, 5 for $5.00, but if you put them on your laptop, you may have to leave it at home, depending upon the client.

Intuition, deliberation, and the evolution of cooperation [hackathons for example?]

Monday, January 11th, 2016

Intuition, deliberation, and the evolution of cooperation by Adam Bear and David G. Rand.

Significance:

The role of intuition versus deliberation in human cooperation has received widespread attention from experimentalists across the behavioral sciences in recent years. Yet a formal theoretical framework for addressing this question has been absent. Here, we introduce an evolutionary game-theoretic model of dual-process agents playing prisoner’s dilemma games. We find that, across many types of environments, evolution only ever favors agents who (i) always intuitively defect, or (ii) are intuitively predisposed to cooperate but who, when deliberating, switch to defection if it is in their self-interest to do so. Our model offers a clear explanation for why we should expect deliberation to promote selfishness rather than cooperation and unifies apparently contradictory empirical results regarding intuition and cooperation.

Abstract:

Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to reign in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
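As a toy illustration of the model’s core intuition (my simplification, not the authors’ actual evolutionary model): a dual-process agent pays a deliberation cost to tailor its play to the game type, while an intuitive defector skips the cost but forgoes the benefits of reciprocity:

```python
# Toy payoffs for a donation game: cooperating gives the partner B at cost C.
B, C = 4.0, 1.0

def payoff_dual_process(p_repeated, deliberation_cost):
    """Intuitively cooperative agent that pays to deliberate: it defects
    (payoff 0) when the game turns out to be one-shot and cooperates
    under reciprocity (payoff B - C), minus the deliberation cost."""
    one_shot = (1 - p_repeated) * (0.0 - deliberation_cost)
    repeated = p_repeated * (B - C - deliberation_cost)
    return one_shot + repeated

def payoff_intuitive_defector(p_repeated):
    """Always defects without deliberating; never earns reciprocity benefits."""
    return 0.0

# With mostly repeated interactions and cheap deliberation, the
# dual-process strategy beats pure intuitive defection:
print(round(payoff_dual_process(0.8, 0.2), 2))
print(payoff_intuitive_defector(0.8))
```

When most interactions are one-shot, the deliberation cost is pure loss and the intuitive defector wins, matching the paper’s finding that deliberation serves defection, not cooperation.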

Guidance for the formation of new communities, i.e., between strangers?

Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers.

How would you motivate the non-deliberative formation of an online community for creating a topic map?

It just occurred to me, is the non-deliberative principle in play at hackathons? Where there are strangers but not sufficient time or circumstances to deliberate on your contribution and return on that contribution?

Hackathons, the ones I have read about, tend to be physical, summer camp type events. Is physical presence and support a key?

If you were going to do a topic map hackathon, physical or online, what would be its focus?

I first saw this in a tweet by Steve Strogatz.

The Truth About Change (Management, Social, Technical)

Friday, January 8th, 2016

Open Mind tweeted the comment “Accurate” along with this image:

[image: two-panel cartoon: “Who wants change?” (all hands raised) vs. “Who wants to change?” (no hands)]

To make this a true triple, a third frame should read:

Who wants someone else to change?

Then you would see all the hands in the air again.

Taking resistance to change as given, how do you adapt to that for marketing purposes?

Going Viral in 2016

Tuesday, December 29th, 2015

How To Go Viral: Lessons From The Most Shared Content of 2015 by Steve Rayson.

I offer this as at least as amusing as it may be useful.

The topic element of a viral post is said to include:

Trending topic (e.g. Zombies), Health & fitness, Cats & Dogs, Babies, Long Life, Love

Hard to get any of those in with technical blog but I could try:

TM’s produce healthy and fit ED-free 90 year-old bi-sexuals with dogs & cats as pets who love all non-Zombies.

That’s 115 characters if you are counting.

Produce random variations on that until I find one that goes viral. 😉

But, I have never cared for click-bait or false advertising. Personally I find it insulting when marketers falsify research.

I may have to document some of those cases in 2016. There is no shortage of it.

None of my tweets may go viral in 2016 but Steve’s post will make it more likely they will be re-tweeted.

Feel free to re-use my suggested tweet as I am fairly certain that “…healthy and fit ED-free 90 year-old bi-sexuals…” is in the public domain.

‘Linked data can’t be your goal. Accomplish something’

Friday, December 18th, 2015

Tim Strehle points to his post: Jonathan Rochkind: Linked Data Caution, which is a collection of quotes from Linked Data Caution (Jonathan Rochkind).

In the process, Tim creates his own quote, inspired by Rochkind:

‘Linked data can’t be your goal. Accomplish something’

Which is easy to generalize to:

‘***** can’t be your goal. Accomplish something’

Whether your hobby horse is linked data, graphs, noSQL, big data, or even topic maps, technological artifacts are just and only that, artifacts.

Unless and until such artifacts accomplish something, they are curios, relics venerated by pockets of the faithful.

Perhaps marketers in 2016 should be told:

Skip the potential benefits of your technology. Show me what it has accomplished (past tense) for users similar to me.

With that premise, you could weed through four or five vendors in a morning. 😉

Connecting News Stories and Topic Maps

Monday, November 16th, 2015

New WordPress plug-in Catamount aims to connect data sets and stories by Mădălina Ciobanu.

From the post:

Non-profit news organisation VT Digger, based in the United States, is building an open-source WordPress plug-in that can automatically link news stories to relevant information collected in data sets.

The tool, called Catamount, is being developed with a $35,000 (£22,900) grant from Knight Foundation Prototype Fund, and aims to give news organisations a better way of linking existing data to their daily news coverage.

Rather than hyperlinking a person’s name in a story and sending readers to a different website, publishers can use the open-source plug-in to build a small window that pops up when readers hover over a selected section of the text.

“We have this great data set, but if people don’t know it exists, they’re not going to be racing to it every single day.

“The news cycle, however, provides a hook into data,” Diane Zeigler, publisher at VT Digger, told Journalism.co.uk.

If a person is mentioned in a news story and they are also a donor, candidate or representative of an organisation involved in campaign finance, for example, an editor would be able to check the two names coincide, and give Catamount permission to link the individual to all relevant information that exists in the database.

A brief overview of this information will then be available in a pop-up box, which readers can click in order to access the full data in a separate browser window or tab.

“It’s about being able to take large data sets and make them relevant to a daily news story, so thinking about ‘why does it matter that this data has been collected for years and years’?

“In theory, it might just sit there if people don’t have a reason to draw a connection,” said Zeigler.

While Catamount only works with WordPress, the code will be made available for publishers to customise and integrate with their own content management systems.

VTDigger.org reports on the grant and other winners in Knight Foundation awards $35,000 grant to VTDigger.

Assuming that the plugin will be agnostic as to the data source, this looks like an excellent opportunity to bind topic map managed content to news stories.

You could, I suppose, return one of those dreary listings of all the prior related stories from a news source.

But that is always a lot of repetitive text to wade through for very little gain.

If you curated content with a topic map, excerpting paragraphs from prior stories when necessary for quotes, that would be a high value return for a user following your link.

Since the award was made only days ago, I assume there isn’t much to report on the Catamount tool yet. I will be following the project and will report back when something testable surfaces.

I first saw this story in an alert from Journalism.co.uk. If you aren’t already following them you should be.

Howler Monkeys with the Louder Voices have Smaller Testicles

Saturday, October 24th, 2015

Howler Monkeys with the Louder Voices have Smaller Testicles by Donald V. Morris.

This was too funny to pass up.

Reminds me of pitch people for technologies that gloss over the details and distort reality beyond mere exaggeration.

Claims of impending world domination, for example, when your entire slice of the market for a type of technology is less than one percent. That’s not “impending” in any recognizable sense of the word.

Add your own commentary/remarks and pass this along to your co-workers.

I first saw this in a tweet by Violet Blue.

PS: Yes, I saw that Howler monkeys with smaller testicles live with harems. Consider that a test of how many people will forward the article without reading it first. 😉

Subjects For Less Obscure Topic Maps?

Saturday, June 27th, 2015

A new window into our world with real-time trends

From the post:

Every journey we take on the web is unique. Yet looked at together, the questions and topics we search for can tell us a great deal about who we are and what we care about. That’s why today we’re announcing the biggest expansion of Google Trends since 2012. You can now find real-time data on everything from the FIFA scandal to Donald Trump’s presidential campaign kick-off, and get a sense of what stories people are searching for. Many of these changes are based on feedback we’ve collected through conversations with hundreds of journalists and others around the world—so whether you’re a reporter, a researcher, or an armchair trend-tracker, the new site gives you a faster, deeper and more comprehensive view of our world through the lens of Google Search.

Real-time data

You can now explore minute-by-minute, real-time data behind the more than 100 billion searches that take place on Google every month, getting deeper into the topics you care about. During major events like the Oscars or the NBA Finals, you’ll be able to track the stories most people are searching for and where in the world interest is peaking. Explore this data by selecting any time range in the last week from the date picker.

Follow @GoogleTrends for tweets about new data sets and trends.

See GoogleTrends at: https://www.google.com/trends/

This has been sitting in a browser tab for several days. I could not decide if it was eye candy or something more serious.

After all, we are talking about searches ranging from the expert to the vulgar.

I went and visited today’s results at Google Trends, and found:

  • 5 Crater of Diamonds State Park, Arkansas
  • 17 Ted 2, Jurassic World
  • 22 World’s Ugliest Dog Contest [It doesn’t say if Trump entered or not.]
  • 35 Episcopal Church
  • 48 Grace Lee Boggs
  • 59 Raquel Welch
  • 68 Dodge, Mopar, Dodge Challenger
  • 79 Xbox One, Xbox, Television
  • 86 Escobar: Paradise Lost, Pablo Escobar, Benicio del Toro
  • 98 Islamic State of Iraq and the Levant

I was glad to see Raquel Welch was in the top 100 but saddened that she was outscored by the Episcopal Church. That has to sting.

When I think of topic maps that I can give you as examples, they involve taxes, Castrati, and other obscure topics. My favorite use case is an ancient text annotated with commentaries and comparative linguistics based on languages no longer spoken.

I know what interests me but not what interests other people.

Thoughts on using Google Trends to pick “hot” topics for topic mapping?

People Don’t Want Something Truly New,…

Sunday, June 21st, 2015

People Don’t Want Something Truly New, They Want the Familiar Done Differently by Nir Eyal.

From the post:

I’ll admit, the bento box is an unlikely place to learn an important business lesson. But consider the California Roll — understanding the impact of this icon of Japanese dining can make all the difference between the success or failure of your product.

If you’ve ever felt the frustration of customers not biting, then you can sympathize with Japanese restaurant owners in America during the 1970s. Sushi consumption was all but non-existent. By all accounts, Americans were scared of the stuff. Eating raw fish was an aberration and to most, tofu and seaweed were punch lines, not food.

Then came the California Roll. While the origin of the famous maki is still contested, its impact is undeniable. The California Roll was made in the USA by combining familiar ingredients in a new way. Rice, avocado, cucumber, sesame seeds, and crab meat — the only ingredient unfamiliar to the average American palate was the barely visible sliver of nori seaweed holding it all together.

It is the success story of introducing Americans to sushi: from almost no consumption at all to a $2.25 billion annual market.

How would you answer the question:

What’s the “California Roll” for topic maps?

Addicted: An Industry Matures / Hooked: How to Build Habit-Forming Products

Friday, June 19th, 2015

Addicted: An Industry Matures by Ted McCarthy.

From the post:

Perhaps nothing better defines our current age than to say it is one of rapid technological change. Technological improvements will continue to provide more to individuals and society, but also to demand more: demand (and leak) more of our data, more time, more attention and more anxieties. While an increasingly vocal minority have begun to rail against certain of these demands, through calls to pull our heads away from our screens and for corporations and governments to stop mining user data, a great many in the tech industry see no reason to change course. User data and time are requisite in the new business ecosystem of the Internet; they are the fuel that feeds the furnace.

Among those advocating for more fuel is Nir Eyal and his recent work, Hooked: How to Build Habit-Forming Products. The book — and its accompanying talk — has attracted a great deal of attention here in the Bay Area, and it’s been overwhelmingly positive. Eyal outlines steps that readers — primarily technology designers and product managers — can follow to make ‘habit-forming products.’ Follow his prescribed steps, and rampant entrepreneurial success may soon be yours.

Since first seeing Eyal speak at Yelp’s San Francisco headquarters last fall, I’ve heard three different clients in as many industries refer to his ideas as “amazing,” and some have hosted reading groups to discuss them. His book has launched to Amazon’s #1 bestseller spot in Product Management, and hovers near the same in Industrial & Product Design and Applied Psychology. It is poised to crack into the top 1000 sellers across the entire site, and reviewers have offered zealous praise: Eric Ries, a Very Important tech Person indeed, has declared the book “A must read for everyone who cares about driving customer engagement.”

And yet, no one offering these reviews has pointed what should be obvious: that Eyal’s model for “hooking” users is nearly identical to that used by casinos to “hook” their own; that such a model engenders behavioral addictions in users that can be incredibly difficult to overcome. Casinos may take our money, but these products can devour our time; and while we’re all very aware of what the casino owners are up to, technology product development thus far has managed to maintain an air of innocence.

While it may be tempting to dismiss a book seemingly written only for, and read only by, a small niche of $12 cold pressed juice-drinking, hoodie and flip flop-wearing techies out on the west coast, one should consider the ways in which those techies are increasingly creating the worlds we all inhabit. Technology products are increasingly determining the news we read, the letters we send, the lovers we meet and the food we eat — and their designers are reading this book, and taking note. I should know: I’m one of them.

I start with Ted McCarthy’s introduction because I found out about Hooked: How to Build Habit-Forming Products by Nir Eyal. It certainly sounded like a book that I must read!

I was hoping to find reviews sans moral hand-wringing but even Hooked: How To Make Habit-Forming Products, And When To Stop Flapping by Wing Kosner gets in on the moral concern act:

In the sixth chapter of the book, Eyal discusses these manipulations, but I think he skirts around the morality issues as well as the economics that make companies overlook them. The Candy Crush Saga game is a good example of how his formulation fails to capture all the moral nuance of the problem. According to his Manipulation Matrix, King, the maker of Candy Crush Saga, is an Entertainer because although their product does not (materially) improve the user’s life, the makers of the game would happily use it themselves. So, really, how bad can it be?

Consider this: Candy Crush is a very habit-forming time-waster for the majority of its users, but a soul-destroying addiction for a distinct minority (perhaps larger, however, than the 1% Eyal refers to as a rule of thumb for user addiction.) The makers of the game may be immune to the game’s addictive potential, so their use of it doesn’t necessarily constitute a guarantee of innocuousness. But here’s the economic aspect: because consumers are unwilling to pay for casual games, the makers of these games must construct manipulative habits that make players seek rewards that are most easily attained through in-app purchases. For “normal” players, these payments may just be the way that they pay to play the game instead of a flat rate up-front or a subscription, and there is nothing morally wrong with getting paid for your product (obviously!) But for “addicted” players these payments may be completely out of scale with any estimate of the value of a casual game experience. King reportedly makes almost $1 million A DAY from Candy Crush, all from in app purchases. My guess is that there is a long tail going on with a relative few players being responsible for a disproportionate share of that revenue.

This is in Forbes.

I don’t read Forbes for moral advice. 😉 I don’t consult technologists either. For moral advice, consult your local rabbi, priest or imam.

Here is an annotated introduction to Hooked, if you want to get a taste of what awaits before ordering the book. If you visit the book’s website, you will be offered a free Hooked workbook. And you can follow Nir Eyal on Twitter: @nireyal. Whatever else can be said about Nir Eyal, he is a persistent marketeer!

Before you become overly concerned about the moral impact of Hooked, recall that legions of marketeers have labored for generations to produce truly addictive products, some with “added ingredients” and others, more recently, not. Creating addictive products isn’t as easy as “read the book” and the rest of us will start wearing bras on our heads. (Apologies to Scott Adams and especially to Dogbert.)

Implying that you can make all of us into addictive product mavens, however, is a marketing hook that few of us can resist.

Enjoy!

Market Research

Saturday, May 30th, 2015

The products most Googled in every country of the world in one crazy map by Drake Baer.

If you are looking to successfully market goods or services, it’s helpful to know what people are interested in buying.

Some of the products are quite surprising:

Mauritania: Slaves.

Japan: Watermelon.

Russia: Fly a MIG.

How do you do your market research?

I first saw this in a post to Facebook by Jamie Clark.

Big Data Leaves Money On The Table

Sunday, March 29th, 2015

Big data hype reminds me of the “He’s Large” song from Popeye.

The recurrent theme is that whatever his other qualities, Bluto is large.

I mention that because, as Anthony Smith illustrates in When it Comes to Data, Small is the New Big, big data is great, but it never tells the whole story.

The whole story includes how and why customers buy and use your product. Trivial things like that.

Don’t use big data like the NSA uses phone data:

“There is no other way we know of to connect the dots.” (NSA & Connecting the Dots)

Big data can show a return on your investment but it will only show you some of the facts that are available.

Don’t allow a fixation on “big data” blind you to the value of small data, which isn’t available to big data approaches and tools.

PS: The NSA uses phone data as churn for the sake of its budget. Churn of big data doesn’t add to your bottom line.

Building A Digital Future

Friday, March 13th, 2015

You may have missed BBC gives children mini-computers in Make it Digital scheme by Jane Wakefield.

From the post:

One million Micro Bits – a stripped-down computer similar to a Raspberry Pi – will be given to all pupils starting secondary school in the autumn term.

The BBC is also launching a season of coding-based programmes and activities.

It will include a new drama based on Grand Theft Auto and a documentary on Bletchley Park.

Digital visionaries

The initiative is part of a wider push to increase digital skills among young people and help to fill the digital skills gap.

The UK is facing a significant skills shortage, with 1.4 million “digital professionals” estimated to be needed over the next five years.

The BBC is joining a range of organisations including Microsoft, BT, Google, Code Club, TeenTech and Young Rewired State to address the shortfall.

At the launch of the Make it Digital initiative in London, director-general Tony Hall explained why the BBC was getting involved.

Isn’t that clever?

Odd that I haven’t heard about a similar effort in the United States.

There are only 15 million (14.6 million actually) secondary students this year in the United States and at $35 per Raspberry Pi, that’s only $525,000,000. That may sound like a lot, but remember that the 2015 budget request for the Department of Homeland Security is $38.2 billion (yes, with a B). We are spending roughly 73 times the amount needed to buy every secondary student in the United States a Raspberry Pi on DHS. A department that has yet to catch a single terrorist.
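A quick sanity check of the arithmetic, using the figures above:

```python
students = 15_000_000          # secondary students, rounded from 14.6 million
pi_cost = 35                   # dollars per Raspberry Pi
dhs_budget = 38_200_000_000    # 2015 DHS budget request, in dollars

total = students * pi_cost
print(f"Cost to equip every student: ${total:,}")
print(f"DHS budget is {dhs_budget / total:.0f}x larger")
```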

There would be consequences to buying every secondary student in the United States a Raspberry Pi:

  • Manufacturers of Raspberry Pi would have a revenue stream for more improvements
  • A vast secondary markets for add-ons for Raspberry Pi computers would be born
  • An even larger market for tutors and classes on Raspberry Pi would jump start
  • Millions of secondary students would be taking positive steps towards digital literacy

The only real drawback that I foresee is that the usual suspects would not be at the budget trough.

Maybe, just this once, the importance of digital literacy and inspiring a new generation of CS researchers is worth taking that hit.

Any school districts distributing Raspberry Pis on their own to set an example for the feds?

PS: I would avoid getting drawn into “accountability” debates. Some students will profit from them, some won’t. The important aspect is development of an ongoing principle of digital literacy and supporting it. Not every child reads books from the library but every community is poorer for the lack of a well supported library.

I first saw this in a tweet by Bart Hannsens.

Selling Big Data to Big Oil

Wednesday, March 11th, 2015

Oil firms are swimming in data they don’t use by Tom DiChristopher.

From the post:

McKinsey & Company wanted to know how much of the data gathered by sensors on offshore oil rigs is used in decision-making by the energy industry. The answer, it turns out, is not much at all.

After studying sensors on rigs around the world, the management consulting firm found that less than 1 percent of the information gathered from about 30,000 separate data points was being made available to the people in the industry who make decisions.

Technology that can deliver data on virtually every aspect of drilling, production and rig maintenance has spread throughout the industry. But the capability—or, in some cases, the desire—to process that data has spread nowhere near as quickly. As a result, drillers are almost certainly operating below peak performance—leaving money on the table, experts said.

Drilling more efficiently could also help companies achieve the holy grail—reducing the break-even cost of producing a barrel of oil, said Kirk Coburn, founder and managing director at Surge Ventures, a Houston-based energy technology investment firm.

Separately, a report by global business consulting firm Bain & Co. estimated that better data analysis could help oil and gas companies boost production by 6 to 8 percent. The use of so-called analytics has become commonplace in other industries from banking and airlines to telecommunications and manufacturing, but energy firms continue to lag.

Great article, although Tom does seem to assume that better data analysis will automatically lead to better results. It can, but I would rather under-promise and over-deliver, particularly in an industry without a lot of confidence in the services being offered.

Machine learning and magic [ Or, Big Data and magic]

Monday, March 9th, 2015

Machine learning and magic by John D. Cook.

From the post:

When I first heard about a lie detector as a child, I was puzzled. How could a machine detect lies? If it could, why couldn’t you use it to predict the future? For example, you could say “IBM stock will go up tomorrow” and let the machine tell you whether you’re lying.

Of course lie detectors can’t tell whether someone is lying. They can only tell whether someone is exhibiting physiological behavior believed to be associated with lying. How well the latter predicts the former is a matter of debate.

I saw a presentation of a machine learning package the other day. Some of the questions implied that the audience had a magical understanding of machine learning, as if an algorithm could extract answers from data that do not contain the answer. The software simply searches for patterns in data by seeing how well various possible patterns fit, but there may be no pattern to be found. Machine learning algorithms cannot generate information that isn’t there any more than a polygraph machine can predict the future.

I supplied the alternative title because of the advocacy of “big data” as a necessity for all enterprises, with no knowledge at all of the data being collected or of the issues for a particular enterprise that it might address. Machine learning suffers from the same affliction.

Specific case studies don’t answer the question of whether machine learning and/or big data is a fit for your enterprise or its particular problems. Some problems are quite common, but incompetence in management is the most prevalent of all (Dilbert), and neither big data nor machine learning can help with that problem.

Take John’s caution to heart for both machine learning and big data. You will be glad you did!
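John’s point can be made concrete: train any classifier on data where the labels are pure noise and held-out accuracy sits at chance, no matter how diligently the algorithm “learns.” A minimal sketch using a nearest-centroid classifier on random data (the sizes and classifier choice here are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Features and labels are independent noise: there is no pattern to learn.
X = rng.standard_normal((1000, 5))
y = rng.integers(0, 2, 1000)

X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

# Nearest-centroid classifier: predict the class whose training mean is closer.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == y_test).mean()
print(f"held-out accuracy: {accuracy:.2f}")  # ≈ 0.5, chance level
```

No amount of extra modeling effort would change that result, because the answer simply isn’t in the data.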

Big Data as statistical masturbation

Tuesday, February 10th, 2015

Big Data as statistical masturbation by Rick Searle.

From the post:

It’s just possible that there is a looming crisis in yet another technological sector whose proponents have leaped too far ahead, and too soon, promising all kinds of things they are unable to deliver. It’s strange how we keep ramming our head into this same damned wall, but this next crisis is perhaps more important than deflated hype at other times, say our over-optimism about the timeline for human space flight in the 1970’s, or the “AI winter” in the 1980’s, or the miracles that seemed just at our fingertips when we cracked the Human Genome while pulling riches out of the air during the dotcom boom- both of which brought us to a state of mania in the 1990’s and early 2000’s.

The thing that separates a potentially new crisis in the area of so-called “Big-Data” from these earlier ones is that, literally overnight, we have reconstructed much of our economy and national security infrastructure on it, in the process eroding our ancient right to privacy on its yet to be proven premises. Now, we are on the verge of changing not just the nature of the science upon which we all depend, but nearly every other field of human intellectual endeavor. And we’ve done and are doing this despite the fact that the most over the top promises of Big Data are about as epistemologically grounded as divining the future by looking at goat entrails.

Well, that might be a little unfair. Big Data is helpful, but the question is helpful for what? A tool, as opposed to a supposedly magical talisman has its limits, and understanding those limits should lead not to our jettisoning the tool of large scale data based analysis, but what needs to be done to make these new capacities actually useful rather than, like all forms of divination, comforting us with the idea that we can know the future and thus somehow exert control over it, when in reality both our foresight and our powers are much more limited.

Start with the issue of the digital economy. One model underlies most of the major Internet giants- Google, FaceBook and to a lesser extent Apple and Amazon, along with a whole set of behemoths who few of us can name but that underlie everything we do online, especially data aggregators such as Axicom. That model is to essentially gather up every last digital record we leave behind, many of them gained in exchange for “free” services and using this living archive to target advertisements at us.

It’s not only that this model has provided the infrastructure for an unprecedented violation of privacy by the security state (more on which below) it’s that there’s no real evidence that it even works.

Ouch! I wonder if Searle means “works” as in satisfies a business goal or objective? Not just “works” in the sense that it doesn’t crash?

That would go a long way to explain the failure of the original Semantic Web vision despite the investment of $billions in its promotion. With the lack of a “works” for some business goal or objective, who cares if it “works” in some other sense?

You need to read Searle in full but one more tidbit to tempt you into doing so:


Here’s the problem with this line of reasoning, a problem that I think is the same, and shares the same solution to the issue of mass surveillance by the NSA and other security agencies. It begins with this idea that “the landscape will become apparent and patterns will naturally emerge.”

The flaw that this reasoning suffers has to do with the way very large data sets work. One would think that the fact that sampling millions of people, which we’re now able to do via ubiquitous monitoring, would offer enormous gains over the way we used to be confined to population samples of only a few thousand, yet this isn’t necessarily the case. The problem is the larger your sample size the greater your chance at false correlations.

Searle does cite Stefan Thurner, whom we talked about in Newly Discovered Networks among Different Diseases…, and who makes the case that any patterns you discover with big data are the starting point for research, not conclusions to be drawn from it. Not the same thing.
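Searle’s point about false correlations is easy to demonstrate: even when every series is independent noise, the number of variable pairs that appear “correlated” grows rapidly with the number of variables measured, because the number of pairs grows quadratically. A minimal sketch (the threshold and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def spurious_correlations(n_vars, n_obs=100, threshold=0.3):
    """Count variable pairs whose sample correlation exceeds the threshold,
    even though every series is independent random noise."""
    data = rng.standard_normal((n_vars, n_obs))
    corr = np.corrcoef(data)
    upper = np.triu_indices(n_vars, k=1)  # each pair counted once
    return int(np.sum(np.abs(corr[upper]) > threshold))

for n in (10, 100, 500):
    print(f"{n:4d} variables -> {spurious_correlations(n)} 'correlations'")
```

Measure enough variables and patterns will indeed “naturally emerge,” which is exactly the problem: they are the starting point for investigation, not findings.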

PS: I do concede that Searle overlooks the unhealthy and incestuous masturbation among business management, management consultancies, vendors, and others with regard to big data. Quick or easy answers are never quick, easy, or even satisfying.

I first saw this in a post by Kirk Borne.