Archive for the ‘Advertising’ Category

#7 Believing that information leads to action (Myth of Liberals)

Monday, February 26th, 2018

Top 10 Mistakes in Behavior Change

Slides from Stanford University’s Persuasive Tech Lab,

A great resource whether you are promoting a product or service, or trying to “interfere” with an already purchased election.

I have a special fondness for mistake #7 on the slides:

Believing that information leads to action

If you want to lose the 2018 mid-terms or, even worse, the presidential election in 2020, keep believing in “educating” voters.

Ping me if you want to be a winning liberal.

If You Can’t See The Data, The Statistics Are False

Saturday, June 10th, 2017

The headline, If You Can’t See The Data, The Statistics Are False is my one line summary of 73.6% of all Statistics are Made Up – How to Interpret Analyst Reports by Mark Suster.

You should read Suster’s post in full, if for no other reason than his accounts of how statistics are created, that’s right, created, for reports:

But all of the data projections were so different so I decided to call some of the research companies and ask how they derived their data. I got the analyst who wrote one of the reports on the phone and asked how he got his projections. He must have been about 24. He said, literally, I sh*t you not, “well, my report was due and I didn’t have much time. My boss told me to look at the growth rate average over the past 3 years an increase it by 2% because mobile penetration is increasing.” There you go. As scientific as that.

I called another agency. They were more scientific. They had interviewed telecom operators, handset manufacturers and corporate buyers. They had come up with a CAGR (compounded annual growth rate) that was 3% higher that the other report, which in a few years makes a huge difference. I grilled the analyst a bit. I said, “So you interviewed the people to get a plausible story line and then just did a simple estimation of the numbers going forward?”

“Yes. Pretty much”
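The analyst’s “method” above fits in a few lines of code. A sketch (the history figures are hypothetical; the point is how little arithmetic stands behind a published projection):

```python
def analyst_projection(history, bump=0.02):
    """Project next year's number the way the analyst describes:
    average the year-over-year growth rates, then tack on a flat 2%
    'because mobile penetration is increasing'."""
    rates = [(b - a) / a for a, b in zip(history, history[1:])]
    avg_rate = sum(rates) / len(rates)
    return history[-1] * (1 + avg_rate + bump)

# Hypothetical market sizes for the past three years (in $M):
# 10% growth each year, so the "projection" is 121 * 1.12.
print(round(analyst_projection([100.0, 110.0, 121.0]), 1))  # 135.5
```

Change `bump` and you have the second agency’s report; a 3% difference in growth rate compounds into a very different market within a few years.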

Write down the name of your favorite business magazine.

How many stories have you enjoyed over the past six months with “scientific” statistics like those?

Suster offers five tips for being a more informed consumer of data. All of them require effort on your part.

I have only one, which requires only reading on your part:

Can you see the data for the statistic? By that I mean: is the original data, who collected it, how it was collected, and when it was collected, available to the reader?

If not, the statistic is either false or inflated.

The test I suggest is applicable at the point where you encounter the statistic. It puts the burden on the author who wants their statistic to be credited to empower the reader to evaluate it.

Imagine the data analyst story where the growth rate statistic had this footnote:

1. Averaged growth rate over past three (3) years and added 2% at direction of management.

It reports the same statistic but also warns the reader the result is a management fantasy. Might be right, might be wrong.

Patronize publications with statistics + underlying data. Authors and publishers will get the idea soon enough.

Addictive Technology (And the Problem Is?)

Thursday, May 4th, 2017

Tech Companies are Addicting People! But Should They Stop? by Nir Eyal.

From the post:

To understand technology addiction (or any addiction for that matter) you need to understand the Q-tip. Perhaps you’ve never noticed there’s a scary warning on every box of cotton swabs that reads, “CAUTION: Do not enter ear canal…Entering the ear canal could cause injury.” How is it that the one thing most people do with Q-tips is the thing manufacturers explicitly warn them not to do?

“A day doesn’t go by that I don’t see people come in with Q-tip-related injuries,” laments Jennifer Derebery, an inner ear specialist in Los Angeles and the past president of the American Academy of Otolaryngology. “I tell my husband we ought to buy stock in the Q-tips company; it supports my practice.” It’s not just that people do damage to their ears with Q-tips, it’s that they keep doing damage. Some even call it an addiction.

On one online forum, a user asks, “Anyone else addicted to cleaning their ears with Q-tips?…I swear to God if I go more than a week without sticking Q-tips in my ears, I go nuts. It’s just so damn addicting…” Elsewhere, another ear-canal enterer also associates ear swabbing with dependency: “How can I detox from my Q-tips addiction?” The phenomenon is so well known that MADtv based a classic sketch on a daughter having to hide Q-tip use from her parents like a junkie.

Q-tip addiction shares something in common with other, more prevalent addictions like gambling, heroin, and even Facebook use. Understanding what I call, the Q-tip Effect, raises important questions about products we use every day, and the responsibilities their makers have in relation to the welfare of their users.
… (emphasis in original)

It’s a great post on addiction (read the definition), technology, etc., but Nir loses me here:

However, there’s a difference between accepting the unavoidable edge cases among unknown users and knowingly promoting the Q-tip Effect. When it comes to companies that know exactly who’s using, how, and how much, much more can be done. To do the right thing by their customers, companies have an obligation to help when they know someone wants to stop, but can’t. Silicon Valley technology companies are particularly negligent by this ethical measure.

The only basis for this “…obligation to help when they know someone wants to stop, but can’t” appears to be Nir’s personal opinion.

That’s ok and he is certainly entitled to it, but Nir hasn’t offered to pay the cost of meeting his projected ethical obligation.

People enjoy projecting ethical obligations onto others, whether anti-abortion, anti-birth control, anti-drug, etc.

Imposing moral obligations that others pay for is more popular in the U.S. than adultery. I don’t have any hard numbers on that last point. Let’s say imposing moral obligations paid for by others is wildly popular and leave it at that.

If I had a highly addictive (in Nir’s sense) app, I would be using the profits to rent backhoes for anyone who needed one along the DAPL pipeline. No questions asked.

It’s an absolute necessity to raise ethical questions about technology and society in general.

But my first question is always: Who pays the cost of your ethical concern?

If it’s not you, that says a lot to me about your concern.

Power to the User! + Pull Advertising

Friday, April 14th, 2017

Princeton’s Ad-Blocking Superweapon May Put an End to the Ad-Blocking Arms Race by Jason Koebler.

From the post:

An ad blocker that uses computer vision appears to be the most powerful ever devised and can evade all known anti ad blockers.

A team of Princeton and Stanford University researchers has fundamentally reinvented how ad-blocking works, in an attempt to put an end to the advertising versus ad-blocking arms race. The ad blocker they’ve created is lightweight, evaded anti ad-blocking scripts on 50 out of the 50 websites it was tested on, and can block Facebook ads that were previously unblockable.

The software, devised by Arvind Narayanan, Dillon Reisman, Jonathan Mayer, and Grant Storey, is novel in two major ways: First, it looks at the struggle between advertising and ad blockers as fundamentally a security problem that can be fought in much the same way antivirus programs attempt to block malware, using techniques borrowed from rootkits and built-in web browser customizability to stealthily block ads without being detected. Second, the team notes that there are regulations and laws on the books that give a fundamental advantage to consumers that cannot be easily changed, opening the door to a long-term ad-blocking solution.
… (emphasis in original)

How very cool! Putting users in charge of the content they view. What a radical idea!

Koebler does the required genuflection towards the “ethics” of blocking ads, but I see no “ethical” issue at all.

IBM, Cisco, etc., are wasting their time and mine advertising enterprise scale security solutions to me. Promise.

What’s broken is that advertisers, like telephone scammers, must contact millions of people to find those unlucky enough to answer the ad and/or phone.

What if instead of a push advertising model we had pull advertising?

For example, not this year but in a few years, I’m going to buy a new car. When that time comes, ads and offers on cars of certain types would be welcome.

What if I could specify a time period, price range and model of car, and for that relevant period of time, I get car ads, etc.? Notice I have pre-qualified myself as interested, so the advertisers aren’t talking about hits out of millions but possibly thousands, if not hundreds. Depends on how good their offers are.

Or if generally I’m interested in books in particular categories or by particular authors? Or when cheese is on sale at Kroger? All of which I could pre-qualify myself.
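A pull-advertising broker could be little more than a matching function over consumer-declared interests. A minimal sketch, with hypothetical field names; a real broker would also keep consumers anonymous to advertisers:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    """A consumer's self-qualification: what they want, at what price,
    and for how long. Field names are hypothetical."""
    category: str
    max_price: float
    expires: int  # hypothetical clock tick

def deliver(request, ads, now):
    """Return only the ads the consumer asked to see, only while the
    window is open."""
    if now > request.expires:
        return []  # window closed: no ads, no pestering
    return [ad for ad in ads
            if ad["category"] == request.category
            and ad["price"] <= request.max_price]

req = PullRequest(category="car", max_price=30000, expires=100)
ads = [{"category": "car", "price": 28500},
       {"category": "car", "price": 45000},
       {"category": "cheese", "price": 4}]
print(deliver(req, ads, now=50))   # only the $28,500 car ad
print(deliver(req, ads, now=200))  # [] -- the window has closed
```

The expiry check is what distinguishes pull from push: when the consumer’s window closes, delivery stops automatically rather than at the advertiser’s discretion.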

Pull advertising reduces the bandwidth wasted by advertisers who push content never knowing where a mark (sorry, customer) may be found.

Such a system would need to protect the privacy of consumers, so they would not be pestered when they had not opted in for ads. But anonymous ad brokerage is certainly doable. (The opposite of finding a subject with topic maps is concealing it.)

Interested in ending web-based spam/click-bait?

Preserving Ad Revenue With Filtering (Hate As Renewal Resource)

Monday, November 21st, 2016

Facebook and Twitter haven’t implemented robust and shareable filters for their respective content streams for fear of disturbing their ad revenue streams.* The power to filter is feared as the power to exclude ads.

Other possible explanations include: Drone employment, old/new friends hired to discuss censoring content; Hubris, wanting to decide what is “best” for others to see and read; NIH (not invented here), which explains silence concerning my proposals for shareable content filters; others?

* Lest I be accused of spreading “fake news,” my explanation for the lack of robust and shareable filters on content on Facebook and Twitter is based solely on my analysis of their behavior and not any inside leaks, etc.

I have a solution for fearing filters as interfering with ad revenue.

All Facebook posts and Twitter tweets would be delivered with an additional Boolean field, ad, which defaults to true (an empty field, following Clojure), meaning the content can be filtered. When the field is false, that content cannot be filtered.

With filters registered and shared via Facebook and Twitter, testing them for proper operation (and refusing to apply any that filter ad content) is a purely algorithmic process.

Users pay to post ad content, the step at which the false flag is set, so freeloading ads no longer escape filters.
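The scheme reduces to one rule: a filter may remove anything except content flagged ad=false. A sketch, where only the field name comes from the proposal and everything else is hypothetical:

```python
# Every post carries a Boolean "ad" field. True (the default) means
# the post may be filtered; False marks paid content that no filter
# may remove.
def apply_filter(posts, keep):
    """Apply a shared filter; paid (ad=False) content always passes."""
    return [p for p in posts if p["ad"] is False or keep(p)]

posts = [{"ad": True,  "text": "regular post"},
         {"ad": False, "text": "paid promotion"}]

# Even a filter that blocks everything cannot touch the paid post.
print(apply_filter(posts, keep=lambda p: False))
```

Because the rule is enforced at delivery, testing a registered filter is mechanical: run it over sample content and verify no ad=false item is dropped.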

What’s my interest? I’m interested in the creation of commercial filters for aggregation, exclusion and creating a value-add product based on information streams. Moreover, ending futile and bigoted attempts at censorship seems like a worthwhile goal to me.

The revenue potential for filters is nearly unlimited.

The number of people who hate rivals the number who want to filter the content seen by others. An unrestrained Facebook/Twitter will attract more hate and “fake news,” which in turn will drive a great need for filters.

Not a virtuous cycle but certainly a profitable one. Think of hate and the desire to censor as renewable resources powering that cycle.

PS: I’m not an advocate for hate and censorship but they are both quite common. Marketing is based on consumers as you find them, not as you wish they were.

Bias For Sale: How Much and What Direction Do You Want?

Tuesday, March 29th, 2016

Epstein and Robertson pitch it a little differently but that is the bottom line of: The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.


Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India’s 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.

I’m not surprised by SEME (search engine manipulation effect).

Although I would probably be more neutral and say: Search Engine Impact on Voting.

Whether you consider one result or another as the result of “manipulation” is a matter of perspective. No search engine strives to deliver “false” information to users.

Gary Anthes in Search Engine Agendas, Communications of the ACM, Vol. 59 No. 4, pages 19-21, writes:

In the novel 1984, George Orwell imagines a society in which powerful but hidden forces subtly shape peoples’ perceptions of the truth. By changing words, the emphases put on them, and their presentation, the state is able to alter citizens’ beliefs and behaviors in ways of which they are unaware.

Now imagine today’s Internet search engines did just that kind of thing—that subtle biases in search engine results, introduced deliberately or accidentally, could tip elections unfairly toward one candidate or another, all without the knowledge of voters.

That may seem an unlikely scenario, but recent research suggests it is quite possible. Robert Epstein and Ronald E. Robertson, researchers at the American Institute for Behavioral Research and Technology, conducted experiments that showed the sequence of results from politically oriented search queries can affect how users vote, especially among undecided voters, and biased rankings of search results usually go undetected by users. The outcomes of close elections could result from the deliberate tweaking of search algorithms by search engine companies, and such manipulation would be extremely difficult to detect, the experiments suggest.

Gary’s post is a good supplement to the original article, covering some of the volunteers who are ready to defend the rest of us from biased search results.

Or as I would put it, to inject their biases into search results as opposed to other biases they perceive as being present.

If you are more comfortable describing the search results you want presented as “fair and equitable,” etc., please do so but I prefer the honesty of naming biases as such.

Or as David Bowie once said:

Make your desired bias, direction, etc., a requirement and allow data scientists to get about the business of conveying it.

That’s certainly what “ethical” data scientists are doing at Google as they conspire with the US government and others to overthrow governments, play censor to fight “terrorists,” and undertake other questionable activities.

I object to some of Google’s current biases because I would have them be biased in a different direction.

Let’s sell your bias/perspective to users with a close eye on the bright line of the law.


Does Honestsociety Not Advertise With Google? (Rigging Search Results)

Monday, February 8th, 2016

I ask about Honestsociety because when I search on Google with the string:

honest society member

I get 82,100,000 “hits” and the first page is entirely honor society stuff.

No, “did you mean,” or “displaying results for…”, etc.

Not a one.

Top of the second page of results did have a webpage that mentions Honestsociety, but not their home site.

I can’t recall seeing an Honestsociety ad with Google and thought perhaps one of you might.

Lacking such ads, my seat of the pants explanation for “honest society member” returning the non-responsive “honor society” listing isn’t very generous.

What anomalies have you observed in Google (or other) search results?

What searches would you use to test ranking in search results by advertiser with Google versus non-advertiser with Google?

Rigging Searches

For my part, it isn’t a question of whether search results are rigged or not, but rather are they rigged the way I or my client prefers?

Or to say it in a positive way: All searches are rigged. If you think otherwise, you haven’t thought very deeply about the problem.

Take library searches for example. Do you think they are “fair” in some sense of the word?

Hmmm, would you agree that the collection practices of a library will give a user an impression of the literature on a subject?

So the search itself isn’t “rigged,” but the data underlying the results certainly influences the outcome.

If you let me pick the data, I can guarantee whatever search result you want to present. Ditto for the search algorithms.

The best we can do is make our choices with regard to the data and algorithms explicit, so that others accept our “rigged” data or choose to “rig” it differently.
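A toy demonstration of the point: hold the algorithm and the query fixed, and the chosen data alone decides the top result. Corpora and query here are hypothetical:

```python
def rank(docs, query):
    """A deliberately simple ranker: score a document by how many
    times the query terms occur in it."""
    terms = query.lower().split()
    return sorted(docs, key=lambda d: -sum(d.lower().count(t) for t in terms))

# Same ranker, same query, opposite stories -- only the data differs.
corpus_a = ["candidate X praised for honesty", "candidate X wins debate"]
corpus_b = ["candidate X scandal deepens", "candidate X under investigation"]

print(rank(corpus_a, "candidate X")[0])  # the flattering result
print(rank(corpus_b, "candidate X")[0])  # the damaging result
```

Nothing about `rank` is dishonest; the “rigging” happened earlier, when the corpus was assembled.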

Is the term “tease” still in fashion?

Thursday, October 1st, 2015

I ask if “tease” is still in fashion (or its more sexist equivalent) because I keep running across partial O’Reilly publications that are touted as “free,” but are in reality, just extended ads for forthcoming books.

A case in point is “Transforms in CSS” which isn’t really a book but an excerpt from the fourth edition of CSS: The Definitive Guide.

A forty-page book?

Social media will light up with posts and reposts about this “free” title.

Save your time and disk space. If anything, get a preview copy of the fourth edition of CSS: The Definitive Guide when it is available.

Make no mistake, I like O’Reilly publications and I am presently reading what I suspect is the best O’Reilly title in a number of years, XQuery by Priscilla Walmsley.

O’Reilly shouldn’t waste bandwidth with disconnected excerpts for its titles.

What is Walmart Doing Right and Topic Maps Doing Wrong?

Sunday, November 30th, 2014

Sentences to ponder by Chris Blattman.

From the post:

Walmart reported brisk traffic overnight. The retailer, based in Bentonville, Ark., said that 22 million shoppers streamed through stores across the country on Thanksgiving Day. That is more than the number of people who visit Disney’s Magic Kingdom in an entire year.

A blog at the Wall Street Journal suggests the numbers are even better than those reported by Chris:

Wal-Mart said it had more than 22 million customers at its stores between 6 p.m. and 10 p.m. Thursday, similar to its numbers a year ago.

In four (4) hours WalMart had more customers than visit Disney’s Magic Kingdom in a year.

Granting that, as of October 31, 2014, WalMart had forty-nine hundred and eighty-seven (4,987) locations in the United States, that remains an impressive number.

Suffice it to say the number of people actively using topic maps is substantially less than the Thanksgiving customer numbers for Walmart.

I don’t have the answer to the title question.

Asking you to ponder it as you do holiday shopping.

What is it about your experience in online or offline shopping that makes it different from your experience with topic maps? Or your pre- or post-shopping experience?

I will take this question up again after the first of 2015 so be working on your thoughts and suggestions over the holiday season.


The structural virality of online diffusion

Saturday, November 22nd, 2014

The structural virality of online diffusion by Sharad Goel, Ashton Anderson, Jake Hofman, and Duncan J. Watts.

Viral products and ideas are intuitively understood to grow through a person-to-person diffusion process analogous to the spread of an infectious disease; however, until recently it has been prohibitively difficult to directly observe purportedly viral events, and thus to rigorously quantify or characterize their structural properties. Here we propose a formal measure of what we label “structural virality” that interpolates between two conceptual extremes: content that gains its popularity through a single, large broadcast, and that which grows through multiple generations with any one individual directly responsible for only a fraction of the total adoption. We use this notion of structural virality to analyze a unique dataset of a billion diffusion events on Twitter, including the propagation of news stories, videos, images, and petitions. We find that across all domains and all sizes of events, online diffusion is characterized by surprising structural diversity. Popular events, that is, regularly grow via both broadcast and viral mechanisms, as well as essentially all conceivable combinations of the two. Correspondingly, we find that the correlation between the size of an event and its structural virality is surprisingly low, meaning that knowing how popular a piece of content is tells one little about how it spread. Finally, we attempt to replicate these findings with a model of contagion characterized by a low infection rate spreading on a scale-free network. We find that while several of our empirical findings are consistent with such a model, it does not replicate the observed diversity of structural virality.
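The measure itself is simple: structural virality is the average distance between all pairs of individuals in the diffusion tree, low for a single broadcast, high for a long person-to-person chain. A minimal sketch on toy trees (node labels hypothetical):

```python
from itertools import combinations

def structural_virality(edges):
    """Average shortest-path distance between all pairs of nodes in a
    diffusion tree: low for a broadcast star, high for a long chain."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def dist(s, t):  # BFS; fine for toy trees
        frontier, seen, d = {s}, {s}, 0
        while t not in frontier:
            frontier = {w for u in frontier for w in adj[u]} - seen
            seen |= frontier
            d += 1
        return d

    pairs = list(combinations(adj, 2))
    return sum(dist(s, t) for s, t in pairs) / len(pairs)

# A pure broadcast: one account, six direct retweets (a star).
star = [(0, i) for i in range(1, 7)]
# A pure person-to-person chain: each adopter recruits one more.
chain = [(i, i + 1) for i in range(6)]

print(structural_virality(star))   # ~1.71: most pairs are 2 hops apart
print(structural_virality(chain))  # ~2.67: pairs can be up to 6 hops apart
```

Both toy trees have seven nodes and the same “popularity,” yet their structural virality differs, which is exactly the paper’s point about size telling you little about spread.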

Before you get too excited, the authors do not provide a how-to-go-viral manual.

In part because:

Large and potentially viral cascades are therefore necessarily very rare events; hence one must observe a correspondingly large number of events in order to find just one popular example, and many times that number to observe many such events. As we will describe later, in fact, even moderately popular events occur in our data at a rate of only about one in a thousand, while “viral hits” appear at a rate closer to one in a million. Consequently, in order to obtain a representative sample of a few hundred viral hits (arguably just large enough to estimate statistical patterns reliably) one requires an initial sample on the order of a billion events, an extraordinary data requirement that is difficult to satisfy even with contemporary data sources.

The authors clearly advance the state of research on “viral hits” and conclude with suggestions for future modeling work.

You can imagine the reaction of marketing departments should anyone get closer to designing successful viral advertising.

A good illustration that something we can observe, “viral hits,” in an environment where the spread can be tracked (Twitter), can still resist our best efforts to model and/or explain how to repeat the “viral hit” on command.

A good story to remember when a client claims that some action is transparent. It may well be, but that doesn’t mean there are enough instances to draw any useful conclusions.

I first saw this in a tweet by Steven Strogatz.

Banksy on Advertising

Saturday, May 17th, 2014

Banksy on Advertising

Gauge your own tolerance for risk before following Banksy.

I think the RIAA and others win because our individual toleration for risk is so low.

We want to protest, take chances, etc., but you know, we might offend a potential future employer or one of their clients or some government wonk.

So long as a majority of us feel that way, the revolution is going to be delayed.

That suggests a solution to me.


PS: Don’t take my RIAA example the wrong way. I think artists and others who contribute to the creative process should be compensated. The record industry with its executives and sycophants, etc., not so much. Music thrives in spite of the recording industry, not because of it.

How Gmail Onboards New Users

Saturday, March 15th, 2014

How Gmail Onboards New Users

From the post:

After passing Hotmail in 2012 as the world’s #1 email service with a sorta-impressive 425 million users(!), it can only be assumed that they’ve grown in the years since. Wanna see how it’s done in Gmail town?

A great set of sixty-nine (69) slides that point out how GMail has treated new users.

In a short phrase: Better than anyone else. (full stop)

You may have the “best” solution or the lower cost solution or whatever. If you don’t get users to stay long enough to realize that, well, you will soon be doing something else.

GMail’s approach won’t work as a cookie-cutter design for you but lessons can be adapted.

I first saw this in a tweet by Fabio Catapano.

The 25 Biggest Brand Fails of 2013

Wednesday, December 25th, 2013

The 25 Biggest Brand Fails of 2013 by Tim Nudd.

From the post:

Arrogant, intolerant, sexist, disgusting, cheesy, tasteless, just plain stupid. Brand fails come in all kinds of off-putting shapes and sizes, though one thing remains constant—the guilty adrenaline rush of ad-enfreude that onlookers feel while watching brands implode for everyone to see.

We’ve collected some of the most delectably embarrassing marketing moments from 2013 for your rubbernecking pleasure. Eat it up, you heartless pigs. And just be thankful it wasn’t you who screwed up this royally.

Amusing but also lessons in what not to do when advertising topic maps.

Another approach that doesn’t work is “…why isn’t everybody migrating to technology X? It’s so great….”

I kid you not. The video seemed to go on forever.

The video missed what many of the 25 ads missed: a two-part test for effective advertising:

  1. What’s in it for the customer?
  2. Is the “what” something the customer cares about?

If you miss either one of those points, the ad is a dud even if it doesn’t make the top 25 poorest ads next year.

Casualty Count for Obamacare (0)

Wednesday, November 20th, 2013

5 lessons IT leaders can learn from Obamacare rollout mistakes by Teena Hammond.

Teena reports on five lessons to be learned from the rollout:

  1. If you’re going to launch a new website, decide whether to use in-house talent or outsource. If you opt to outsource, hire a good contractor.
  2. Follow the right steps to hire the best vendor for the project, and properly manage the relationship.
  3. Have one person in charge of the project with absolute veto power.
  4. Do not gloss over any problems along the way. Be open and honest about the progress of the project. And test the site.
  5. Be ready for success or failure. Hope for the best but prepare for the worst and have guidelines to manage any potential failure.

There is a sixth lesson that emerges from Vaughn Bullard, CEO and founder of Build.Automate Inc., who is quoted in part saying:

“The contractor telling the government that it was ready despite the obvious major flaws in the system is just baffling to me. If I had an employee that did something similar, I would have terminated their employment. It’s pretty simple.”

What it comes down to in the end, Bullard said, is that, “Quality and integrity count in all things.”

To avoid repeated failures in the future (sixth lesson), terminate those responsible for the current failure.

All contractors and their staffs. Track the staffs in order to avoid the same staff moving to other contractors.

Terminate all appointed or hired staff who were responsible for the contract and/or management of the project.

Track former staff employment by contractors and refuse contracts wherever they are employed.

You may have noticed that the reported casualty count for the Obamacare failure has been zero.

What incentive exists for the next group of contract/project managers and/or contractors for “quality and integrity?”

That would be the same as the casualty count, zero.

PS: Before you protest the termination and ban of failures as cruel, consider its advantages as a wealth redistribution program.

The government may not get better service but it will provide opportunities for fraud and poor quality work from new participants.

Not to mention there are IT service providers who exhibit quality and integrity. Absent traditional mis-management, the government could happen upon one of those.

The tip for semantic technologies is to under-promise and over-deliver. Always.

HyperDex 1.0RC5

Wednesday, November 20th, 2013

HyperDex 1.0RC5 by Robert Escriva.

From the post:

We are proud to announce HyperDex 1.0.rc5, the next generation NoSQL data store that provides ACID transactions, fault-tolerance, and high-performance. This new release has a number of exciting features:

  • Improved cluster management. The cluster will automatically grow as new nodes are added.
  • Backup support. Take backups of the coordinator and daemons in a consistent state and be able to restore the cluster to the point when the backup was taken.
  • An admin library which exposes performance counters for tracking cluster-wide statistics relating to HyperDex
  • Support for HyperLevelDB. This is the first HyperDex release to use HyperLevelDB, which brings higher performance than Google’s LevelDB.
  • Secondary indices. Secondary indices improve the speed of search without the overhead of creating a subspace for the indexed attributes.
  • New atomic operations. Most key-based operations now have conditional atomic equivalents.
  • Improved coordinator stability. This release introduces an improved coordinator that fixes a few stability problems reported by users.

Binary packages for Debian 7, Ubuntu 12.04-13.10, Fedora 18-19, and CentOS 6 are available on the HyperDex Download page, as well as source tarballs for other Linux platforms.

BTW, HyperDex has a cool logo:


Good logos are like good book covers, they catch the eye of potential customers.

A book sale starts when a customer picks a book up, hence the need for a good cover.

What sort of cover does your favorite semantic application have?

How videos go viral on Twitter – Three stories

Monday, August 12th, 2013

How videos go viral on Twitter – Three stories by Gordon MacMillan.

From the post:

What is it that makes videos go viral? It is one of the big questions in digital marketing. While there is no single magic formula, we’ve come up with some key insights after tracking the stories behind three recent viral videos.

  1. Twitter users love video
  2. Videos are easily shareable
  3. Promoted products amplify your reach
  4. Get creative with Vine

See Gordon’s post for the details. Although I warn you up front that there is no special sauce that makes a video go viral.

What would you show about topic maps in six seconds?

Topic Maps Logo?

Sunday, April 28th, 2013

While writing about Drake, I was struck by the attractiveness of the project logo:

Drake logo

So I decided to look at some other projects logos, just to get some ideas on what other projects were doing as far as logos:

Hadoop logo

Mahout logo

Chukwa logo

But the most famous project at Apache has the simplest logo of all:

HTTPD logo

To be truthful, when someone says web server, I automatically think of the Apache server. Others exist and new ones are invented, but Apache server is nearly synonymous with web server.

Perhaps the lesson is the logo did not make it so.

Has anyone written a history of the Apache web server?

A cross between a social history and a technical one, that illustrates how the project responded to user demands and requirements. That could make a very nice blueprint for other projects to follow.

Poorly Researched Infographics [Adaptation for Topic Maps?]

Tuesday, January 15th, 2013

Phillip Price posted this at When you SHARE poorly researched infographics….

Ride with Hitler

Two questions:

  1. Your suggestions for a line about topic maps (same image)?
  2. What other “classic” posters merit re-casting to promote topic maps?

I am not sure how to adapt the ScotTissue Towels poster that headlines:

Is your washroom breeding Bolsheviks?


How To Make That One Thing Go Viral

Monday, January 14th, 2013

How To Make That One Thing Go Viral (Slideshare)

From the description:

Everyone wants to know how to make that one thing go viral. Especially bosses. Here’s the answer. So now maybe they will stop asking you. See the Upworthy version of this here:

Worth reviewing every week or so until it becomes second nature.

Somehow I doubt: “Topic Maps: Reliable Sharing of Content Across Semantic Domains” is ever going viral.

Well, one down, 24 more to go.


I first saw this at Four short links: 10 January 2013 by Nat Torkington.

…Self-Destructing Ads for Lingerie

Monday, January 7th, 2013

Grey Uses the New Facebook Poke to Create Self-Destructing Ads for Lingerie Onetime clip for onetime sale by Rebecca Cullers.

From the post:

Facebook has redesigned its Poke feature to allow people to send their friends video clips that self-destruct 10 seconds after opening. “Hey, that would be great for safe sexting!” you probably thought immediately. So, it shouldn’t come as a shock that the first advertiser to use the new Facebook Poke is a lingerie company. Delta Lingerie crafted a campaign with Grey Tel Aviv in which a 10-second clip of a model pulling on some Delta stockings—a video that couldn’t be saved or even shared—was sent to the model’s friends. A few seconds at the end directed them to Delta’s website to claim a “one-time” discount on the stockings. Since Facebook allows you to poke only 40 people at a time—and the app deletes the video on the sender’s end, too—the model’s agent had to shoot the same clip over and over again.

Certainly an interesting idea, self-destructing messages, particularly for college football coaches and others with lots of texting time on their hands.

Rather specialized though.

And for whatever reason people keep those sorts of messages.

Rather than encryption, which always attracts attention, what about transforming messages into “box scores” for some sport?

Something that might be overlooked when searching for “sexting” messages on a coach's phone?

Particularly if the transformation was a hidden part of message management, discoverable only on examination of the source code.
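The transformation idea above is easy to sketch. A toy example (the function names and the nibble-per-"inning" encoding are my own invention; real concealment would need statistically plausible score distributions, not just a reversible mapping):

```python
def to_box_score(message: str) -> str:
    # Encode each byte of the message as a pair of "inning" scores:
    # the high nibble and the low nibble, each in the range 0-15.
    nibbles = []
    for b in message.encode("utf-8"):
        nibbles.extend((b >> 4, b & 0x0F))
    return " ".join(str(n) for n in nibbles)

def from_box_score(score: str) -> str:
    # Reverse the mapping: pair the numbers back up into bytes.
    nibbles = [int(n) for n in score.split()]
    data = bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
    return data.decode("utf-8")
```

The output is just a row of small numbers, which is the disguise: nothing in it looks like ciphertext, so it draws no attention the way encryption does.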

1,002 uses of topic maps?

What do you think?

The 2015 Digital Marketing Rule Book. Change or Perish.

Monday, January 9th, 2012

The 2015 Digital Marketing Rule Book. Change or Perish.

Avinash Kaushik writes:

It is the season to be predicting the future, but that is almost always a career-limiting move. So I’m not going to do that.

It is a lot easier to predict the present. So I’m not going to do that either.

Rather, I’m going to share a clump of realities/rules garnered from the present to help ready you for the predictable near future. Now here is the great part… if you follow these rules and act on these insights I believe you’ll be significantly better prepared for the unpredictable future.

Awesome right?

Now here’s another surprise: These rules/insights/mind shifts are not about data!

He covers a lot of interesting ground to conclude:

Do you agree with my learning that our primary problem is not web analytics/data but, rather, it is unimaginative web strategies?

My “take away” was much earlier in his post:

All while constantly optimizing your portfolio via controlled experiments.

For me the primary problem is two-fold:

  • web analytics/data as understood by management (not the users they are trying to reach), and
  • unimaginative web strategies

How can you have an imaginative or even intelligible web strategy unless and until you understand user behavior and users' understanding of the data?

See my post on testing relevance tuning with the top ten actresses for 2011 as an example of questioning web analytics.

Google removes more search functionality

Saturday, December 17th, 2011

Google removes more search functionality by Phil Bradley.

From the post:

In Google’s apparently lemming-like attempt to throw away as much search functionality as they can, they have now revamped their advanced search page. Regular readers will recall that I wrote about Google making it harder to find, and now they’re reducing the available options. The screen is now following the usual grey/white/red design, but to refresh your memory, this is what it used to look like:

Just in case you are looking for search opportunities in the near future.

The smart money says not to try to be everything to everybody. Pick off a popular (read: advertising-supporting) subpart of all content and work it up really well. Offer users for that area what seem like useful defaults. The defaults for the television/movie types are likely to be different from those for the Guns & Ammo crowd. As would the advertising you would sell.

Remind me to write about using topic maps to create pull-model advertising. So that viewers pre-qualify themselves and you can charge more for “hits” on ads.

Serendipity Is Not An Intent

Tuesday, November 15th, 2011

Serendipity Is Not An Intent

From the post:

Wired had two amazing pieces on online advertising yesterday and while Felix Salmon’s piece The Future of Online Advertising could be Yieldbot’s manifesto it is the piece Can ‘Serendipity’ Be a Business Model? that deals more directly with our favorite topic, intent.


Twitter is the greatest discovery engine ever created on the web. But discovery can be, and not be, serendipitous. Sometimes, as Dorsey alludes to, you discover things you had no idea existed, but much more often you discover things after you have intent around what you want to discover. This is an important differentiation for Twitter to consider. It’s important because it’s a different algorithm.

Discovery intent is not an algo about “how do we introduce you to something that would otherwise be difficult for you to find, but something that you probably have a deep interest in?” There is no “introduce” and “probably” in the discovery intent algo. Most importantly, there is no “we.” It’s an algo about “how do you discover what you’re interested in.”

Discovering more about what you’re interested in has always been Twitter’s greatest strength. It leverages both user-defined inputs and the rich content streams where context and realtime matching can occur. Just like Search.

If Twitter wants to build a discovery system for advertising it should look like this. (emphasis added)

This inverts advertising and, when you think about it, the search algorithm as well. Rather than discovering, poorly, what interests the user or answers a question, enable the user to discover (a pull model) what interests them.

Completely different way of thinking about advertising and search.

Priesthood of the user? Worked (depending on who you ask) a long time ago.

Maybe, just maybe, a service architecture based on that as a goal, could disrupt the current “I know better than you” push models for search and advertising.
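The pull model described in the last two posts can be sketched in a few lines: users declare their interests up front (pre-qualifying themselves) and ads are selected by overlap with declared topics, not by the platform's inferences. A toy example, with all names hypothetical:

```python
from typing import Dict, List

def match_ads(user_interests: List[str], inventory: List[Dict]) -> List[str]:
    # Score each ad by how many of the user's declared interests
    # overlap its declared topics; return matching ad ids, best first.
    scored = []
    for ad in inventory:
        overlap = len(set(user_interests) & set(ad["topics"]))
        if overlap:
            scored.append((overlap, ad["id"]))
    scored.sort(reverse=True)
    return [ad_id for _, ad_id in scored]
```

Because every hit comes from interests the viewer volunteered, each one is pre-qualified, which is exactly why such hits could command a higher price.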