Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

April 22, 2016

Corporate Bribery/Corruption – Poland/U.S./Russia – A Trio

Filed under: Auditing,Business Intelligence,Government,Topic Maps — Patrick Durusau @ 2:22 pm

GIJN (Global Investigative Journalism Network) tweeted a link to Corporate misconduct – individual consequences, 14th Global Fraud Survey this morning.

From the foreword by David L. Stulb:

In the aftermath of recent major terrorist attacks and the revelations regarding widespread possible misuse of offshore jurisdictions, and in an environment where geopolitical tensions have reached levels not seen since the Cold War, governments around the world are under increased pressure to face up to the immense global challenges of terrorist financing, migration and corruption. At the same time, certain positive events, such as the agreement by the P5+1 group (China, France, Russia, the United Kingdom, the United States, plus Germany) with Iran to limit Iran’s sensitive nuclear activities are grounds for cautious optimism.

These issues contribute to volatility in financial markets. The banking sector remains under significant regulatory focus, with serious stress points remaining. Governments, meanwhile, are increasingly coordinated in their approaches to investigating misconduct, including recovering the proceeds of corruption. The reason for this is clear. Bribery and corruption continue to represent a substantial threat to sluggish global growth and fragile financial markets.

Law enforcement agencies, including the United States Department of Justice and the United States Securities and Exchange Commission, are increasingly focusing on individual misconduct when investigating impropriety. In this context, boards and executives need to be confident that their businesses comply with rapidly changing laws and regulations wherever they operate.

For this, our 14th Global Fraud Survey, EY interviewed senior executives with responsibility for tackling fraud, bribery and corruption. These individuals included chief financial officers, chief compliance officers, heads of internal audit and heads of legal departments. They are ideally placed to provide insight into the impact that fraud and corruption is having on business globally.

Despite increased regulatory activity, our research finds that boards could do significantly more to protect both themselves and their companies.

Many businesses have failed to execute anti-corruption programs to proactively mitigate their risk of corruption. Similarly, many businesses are not yet taking advantage of rich seams of information that would help them identify and mitigate fraud, bribery and corruption issues earlier.

Between October 2015 and January 2016, we interviewed 2,825 individuals from 62 countries and territories. The interviews identified trends, apparent contradictions and issues about which boards of directors should be aware.

Partners from our Fraud Investigation & Dispute Services practice subsequently supplemented the Ipsos MORI research with in-depth discussions with senior executives of multinational companies. In these interviews, we explored the executives’ experiences of operating in certain key business environments that are perceived to expose companies to higher fraud and corruption risks. Our conversations provided us with additional insights into the impact that changing legislation, levels of enforcement and cultural behaviors are having on their businesses. Our discussions also gave us the opportunity to explore pragmatic steps that leading companies have been taking to address these risks.

The executives to whom we spoke highlighted many matters that businesses must confront when operating across borders: how to adapt market-entry strategies in countries where cultural expectations of acceptable behaviors can differ; how to get behind a corporate structure to understand a third party’s true ownership; the potential negative impact that highly variable pay can have on incentives to commit fraud and how to encourage whistleblowers to speak up despite local social norms to the contrary, to highlight a few.

Our survey finds that many respondents still maintain the view that fraud, bribery and corruption are other people’s problems despite recognizing the prevalence of the issue in their own countries. There remains a worryingly high tolerance or misunderstanding of conduct that can be considered inappropriate — particularly among respondents from finance functions. While companies are typically aware of the historic risks, they are generally lagging behind on the emerging ones, for instance the potential impact of cybercrime on corporate reputation and value, while now well publicized, remains a matter of varying priority for our respondents. In this context, companies need to bolster their defenses. They should apply anti-corruption compliance programs, undertake appropriate due diligence on third parties with which they do business and encourage and support whistleblowers to come forward with confidence. Above all, with an increasing focus on the accountability of the individual, company leadership needs to set the right tone from the top. It is only by taking such steps that boards will be able to mitigate the impact should the worst happen.

This survey is intended to raise challenging questions for boards. It will, we hope, drive better conversations and ongoing dialogue with stakeholders on what are truly global issues of major importance.

We acknowledge and thank all those executives and business leaders who participated in our survey, either as respondents to Ipsos MORI or through meeting us in person, for their contributions and insights. (emphasis in original)

Apologies for the long quote, but it was necessary to set the stage for the significance of:

…increasingly focusing on individual misconduct when investigating impropriety.

That policy grants a “bye” to corporations that benefit from individual misconduct, in favor of punishing individual actors within a corporation.

While punishing individuals is legitimate, corporations cannot act except through their agents. Failing to punish corporations enables their shareholders to continue benefiting from illegal behavior.

Another point of significance: the listing of countries on page 44 gives the percentage of respondents who agree that “…bribery/corrupt practices happen widely…” as follows (in part):

Rank Country % Agree
30 Poland 34
31 Russia 34
32 U.S. 34

When the Justice Department gets hoity-toity about law and corruption, keep those figures in mind.

If the Justice Department representative you are talking to isn’t corrupt (it happens), there’s probably one on either side of them who is.

Topic maps can help ferret out or manage “corruption,” depending upon your point of view. Even structural corruption; take, for example, the U.S. political campaign donation process.

January 15, 2016

Big data ROI in 2016: Put up, or shut up [IT vendors to “become” consumer-centric?]

Filed under: BigData,Business Intelligence — Patrick Durusau @ 10:15 pm

Big data ROI in 2016: Put up, or shut up by David Weldon.

From the post:

When it comes to data analytics investments, this is the year to put up, or shut up.

That is the take of analysts at Forrester Research, who collectively expect organizations to take a hard look at their data analytics investments so far, and see some very real returns on those investments. If strong ROI can’t be shown, some data initiatives may see the plug pulled on those projects.

These sober warnings emerge from Forrester’s top business trends forecast for 2016. Rather than a single study or survey on top trends, the Forrester forecast combines the results of 35 separate studies. Carrie Johnson, senior vice president of research at Forrester, discussed the highlights with Information Management, including the growing impatience at many organizations that big data produce big results, and where organizations truly are on the digital transformation journey.

“I think one of the key surprises is that folks in the industry assume that everyone is farther along than they are,” Johnson explains. “Whether it’s with digital transformation, or a transformation to become a customer-obsessed firm, there are very few companies pulling off those initiatives at a wholesale level very well. Worse, many companies in the year ahead will continue to flail a bit with one-off projects and bolt-on strategies, versus true differentiation through transformation.”

Asked why this misconception exists, Johnson notes that “Vendors do tend to paint a rosier picture of adoption in general because it behooves them. Also, every leader in an organization sees their problems, and then sees an article or sees the use of an app by a competitor and thinks, ‘my gosh, these companies are so far ahead of where we are.’ The reality may be that that app may have been an experiment by a really savvy team in the organization, but it’s not necessarily representative of a larger commitment by the organization, both financially and through resources.”

It’s not the first time you have heard data ROI discussed on this blog but when Forrester Research says it, it sounds more important. Moreover, their analysis is the result of thirty-five separate studies.

Empirical verification (the studies) is good to have, but you don’t need an MBA to realize that businesses that make decisions on some basis other than ROI aren’t businesses very long. Or at least not profitable businesses.

David’s conclusion makes it clear that your ROI is your responsibility:

The good news: “We believe that this is the year that IT leaders — and CIOs in particular … embrace a new way of investing in and running technology that is customer-centric….”

If a lack of clarity and an undefined ROI for IT are problems at your business, well, it’s your money.

November 10, 2015

How Computers Broke Science… [Soon To Break Businesses …]

Filed under: Business Intelligence,Replication,Scientific Computing,Transparency — Patrick Durusau @ 3:04 pm

How Computers Broke Science — and What We can do to Fix It by Ben Marwick.

From the post:

Reproducibility is one of the cornerstones of science. Made popular by British scientist Robert Boyle in the 1660s, the idea is that a discovery should be reproducible before being accepted as scientific knowledge.

In essence, you should be able to produce the same results I did if you follow the method I describe when announcing my discovery in a scholarly publication. For example, if researchers can reproduce the effectiveness of a new drug at treating a disease, that’s a good sign it could work for all sufferers of the disease. If not, we’re left wondering what accident or mistake produced the original favorable result, and would doubt the drug’s usefulness.

For most of the history of science, researchers have reported their methods in a way that enabled independent reproduction of their results. But, since the introduction of the personal computer — and the point-and-click software programs that have evolved to make it more user-friendly — reproducibility of much research has become questionable, if not impossible. Too much of the research process is now shrouded by the opaque use of computers that many researchers have come to depend on. This makes it almost impossible for an outsider to recreate their results.

Recently, several groups have proposed similar solutions to this problem. Together they would break scientific data out of the black box of unrecorded computer manipulations so independent readers can again critically assess and reproduce results. Researchers, the public, and science itself would benefit.

Whether you are looking for specific proposals to make computed results capable of replication or quotes to support that idea, this is a good first stop.

FYI for business analysts: how are you going to replicate the results of computer runs to establish your “due diligence” before critical business decisions?

What looked like a science or academic issue has liability implications!

Changing a few variables in a spreadsheet, or in more complex machine learning algorithms, can make you look criminally negligent, if not criminal.

The computer illiteracy/incompetence of prosecutors and litigants is only going to last so long. Prepare defensive audit trails to enable the replication of your actual* computer-based business analysis.

*I offer advice on techniques for such audit trails. The audit trails you choose to build are up to you.
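For what it’s worth, here is a minimal sketch of the kind of defensive audit trail I have in mind: record what data went in, what code ran, and with what parameters, so the run can be reproduced later. The file names and fields are illustrative, not a prescription.

```python
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone
from pathlib import Path


def sha256(path):
    """Hash a file so later reviewers can confirm the exact input used."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def log_run(data_file, script_file, params, audit_log="analysis_audit.jsonl"):
    """Append one audit record per analysis run (illustrative fields)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_file": str(data_file),
        "data_sha256": sha256(data_file),
        "script_file": str(script_file),
        "script_sha256": sha256(script_file),
        "parameters": params,
        "python_version": sys.version,
        "platform": platform.platform(),
    }
    with open(audit_log, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record


# Example: log a run before executing the analysis itself.
# log_run("q3_sales.csv", "margin_model.py", {"discount_rate": 0.07})
```

Even that little bit of bookkeeping is enough to re-run an analysis against the same inputs and code later, which is the heart of replication.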

October 26, 2015

Avoiding Big Data: More Business Intelligence Than You Would Think

Filed under: BigData,Business Intelligence — Patrick Durusau @ 8:29 pm

Observing that boosters of “big data” are in a near panic about the slow adoption of “big data” technologies requires no reference.

A recent report from Iron Mountain and PwC may shed some light on the reasons for slow adoption of “big data”:

[Infographic from the Iron Mountain/PwC report]

If you are in the 66% that extracts little or no value from your data, it makes no business sense to buy into “big data” when you can’t derive value from the data you already have.

Does anyone seriously disagree with that statement? Other than people marketing services whether the client benefits or not.

The numbers get even worse:

From the executive summary:

We have identified a large ‘misguided majority’ – three in four businesses (76%) that are either constrained by legacy, culture, regulatory data issues or simply lack any understanding of the potential value held by their information. They have little comprehension of the commercial benefits to be gained and have therefore not made the investment required to obtain the information advantage.

Now we are up to 3/4 of the market that could not benefit from “big data” tools if they dropped from the sky tomorrow.

To entice you to download Seizing the Information Advantage (the full report):

Typical attributes and behaviours of the mis-guided majority

  • Information and exploitation of value from information is not a priority for senior leadership
  • An information governance oversight body, if it exists, is dominated by IT
  • Limited appreciation of how to exploit their information or the business benefits of doing so
  • Progress is allowed to be held back by legacy issues, regulatory issues and resources
  • Where resources are deployed to exploit information, this is often IT led, and is not linked to the overall business strategy
  • Limited ability to identify, manage and merge large amounts of data sources
  • Analytical capability may exist in the business but is not focused on information value
  • Excessive use of Excel spreadsheets with limited capacity to extract insight

Hmmm, 8 attributes and behaviours of the mis-guided majority (76%) and how many of those issues are addressed by big data technology?

Err, one. Yes?

Limited ability to identify, manage and merge large amounts of data sources

The other seven (7) attributes or behaviours that impede business from deriving value from data have little or no connection to big data technology.

Those are management, resources and social issues that no big data technology can address.

Avoidance of adoption of big data technology reveals a surprising degree of “business intelligence” among those surveyed.

A number of big data technologies will be vital to business growth, but only if the management and human issues that would enable their effective use are addressed.

Put differently, investment in big data technologies without addressing related management and human issues is a waste of resources. (full stop)


The report wasn’t all that easy to track down on the Iron Mountain site so here are some useful links:

Executive Summary

Seizing the Information Advantage (“free” but you have to give up your contact information)

Infographic Summary


I first saw this at: 96% of Businesses Fail to Unlock Data’s Full Value by Bob Violino. Bob did not include a link to the report or sufficient detail to be useful.

June 13, 2015

Business Linkage Analysis: An Overview

Filed under: Business Intelligence,Topic Maps — Patrick Durusau @ 8:18 pm

Business Linkage Analysis: An Overview by Bob Hayes.

From the post:

Customer feedback professionals are asked to demonstrate the value of their customer feedback programs. They are asked: Does the customer feedback program measure attitudes that are related to real customer behavior? How do we set operational goals to ensure we maximize customer satisfaction? Are the customer feedback metrics predictive of our future financial performance and business growth? Do customers who report higher loyalty spend more than customers who report lower levels of loyalty? To answer these questions, companies look to a process called business linkage analysis.

Business Linkage Analysis is the process of combining different sources of data (e.g., customer, employee, partner, financial, and operational) to uncover important relationships among important variables (e.g., call handle time and customer satisfaction). For our context, linkage analysis will refer to the linking of other data sources to customer feedback metrics (e.g., customer satisfaction, customer loyalty).

Business Case for Linkage Analyses

Based on a recent study on customer feedback programs best practices (Hayes, 2009), I found that companies who regularly conduct operational linkages analyses with their customer feedback data had higher customer loyalty (72nd percentile) compared to companies who do not conduct linkage analyses (50th percentile). Furthermore, customer feedback executives were substantially more satisfied with their customer feedback program in helping them manage customer relationships when linkage analyses (e.g., operational, financial, constituency) were a part of the program (~90% satisfied) compared to their peers in companies who did not use linkage analyses (~55% satisfied). Figure 1 presents the effect size for VOC operational linkage analyses.

Linkage analyses appears to have a positive impact on customer loyalty by providing executives the insights they need to manage customer relationships. These insights give loyalty leaders an advantage over loyalty laggards. Loyalty leaders apply linkage analyses results in a variety of ways to build a more customer-centric company: Determine the ROI of different improvement effort, create customer-centric operational metrics (important to customers) and set employee training standards to ensure customer loyalty, to name a few. In upcoming posts, I will present specific examples of linkage analyses using customer feedback data.

Discovering linkages between factors hidden in different sources of data?

Or as Bob summarizes:

Business linkage analysis is the process of combining different sources of data to uncover important insights about the causes and consequence of customer satisfaction and loyalty. For VOC programs, linkage analyses fall into three general types: financial, operational, and constituency. Each of these types of linkage analyses provide useful insight that can help senior executives better manage customer relationships and improve business growth. I will provide examples of each type of linkage analyses in following posts.
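To make “combining different sources of data” concrete, here is a toy sketch (not Bob’s method, and with made-up column names): join operational data to survey scores on a shared customer key and look at the correlations.

```python
import pandas as pd

# Hypothetical extracts: a VoC survey and an operational system, keyed by customer ID.
voc = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "satisfaction": [9, 7, 4, 8, 5],           # 0-10 survey score
})
ops = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "avg_call_handle_min": [3.1, 4.5, 9.8, 3.9, 8.2],
    "annual_spend": [1200, 950, 400, 1100, 520],
})

# The "linkage": combine the sources on a shared key...
linked = voc.merge(ops, on="customer_id")

# ...then look for relationships between operational drivers,
# attitudes, and financial outcomes.
print(linked.corr(numeric_only=True)["satisfaction"])
```

The real work, of course, is in getting the keys and the subject identities right across those sources, which is where the documentation question below comes in.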

More posts in this series:

Linking Financial and VoC Metrics

Linking Operational and VoC Metrics

Linking Constituency and VoC Metrics

BTW, VoC = voice of customer.

A large and important investment in data collection, linking and analysis.

Of course, you do have documentation for all the subjects that occur in your business linkage analysis? So that when that twenty-something who crunches all the numbers leaves, you won’t have to start from scratch? Yes?

Given the state of cybersecurity, I thought it better to ask than to guess.

Topic maps can save you from awkward questions about why the business linkage analysis reports are late. Or perhaps not coming until you can replace personnel and have them reconstruct the workflow.

Topic map based documentation is like insurance. You may not need it every day but after a mission critical facility burns to the ground, do you want to be the one to report that your insurance had lapsed?

June 4, 2015

TPP – Just One of Many Government Secrets

Filed under: Business Intelligence,Government,Transparency — Patrick Durusau @ 8:22 am

The Trans-Pacific Partnership is just one of many government secrets.

Reading Army ELA: Weapon Of Mass Confusion? by Kevin McLaughlin, I discovered yet another.

From the post:


As DISA and VMware work on a new JELA proposal, sources familiar with the matter said the relationship between the two is coming under scrutiny from other enterprise vendors. What’s more, certain details of the JELA remain shrouded in secrecy.

DISA’s JELA document contains several large chunks of redacted text, including one entire section titled “Determination Of Fair And Reasonable Cost.”

In other parts, DISA has redacted specific figures, such as VMware’s percentage of the DOD’s virtualized environments and the total amount the DOD has invested in VMware software licenses. The redacted portions have fueled industry speculation about why these and other aspects of the contract were deemed unfit for the eyes of the public.

DISA’s rationale for awarding the Army ELA and DOD JELA to VMware without opening it up to competition is also suspect, one industry executive who’s been tracking both deals told CRN. “Typically, the justification for sole-sourcing contracts to a vendor is that they only cover maintenance, yet these contracts obviously weren’t maintenance-only,” said the source.

The situation is complex, but essentially the Army signed a contract with VMware under which it downloaded suites of software when it wanted only one particular part of each suite, yet was billed for maintenance costs on the entire suite.

That appears to be what was specified in the VMware ELA, which should be a motivation for using topic maps in connection with government contracts.

Did that go by a little fast? The jump from the VMware ELA to topic maps?

Think about it. The “Army” didn’t really sign a contract with “VMware” any more than “VMware” signed a contract with the “Army.”

No, someone in particular, a nameable individual or group of nameable individuals, had meetings and reviews, and ultimately decided to agree to the contract between the “Army” and “VMware.” All of those individuals had roles in the proceedings that resulted in the ELA in question.

Yet, when it comes time to discuss the VMware ELA, the best we can do is talk about it as though these large organizations acted on their own. The only named individual who might be in some way responsible for the mess is the Army’s current CIO, Lt. Gen. Robert S. Ferrell, and he got there after the original agreement but before its later extension.

Topic maps, since we don’t have to plot the domain before we start uncovering relationships and roles, could easily construct a history of contacts (email, phone, physical meetings), aligned with documents (drafts, amendments), of all the individuals on all sides of this sorry episode.

Topic maps can’t guarantee that the government, the DOD in this case, won’t make ill-advised contracts in the future. No software can do that. What topic maps can do is trace responsibility for such contracts to named individuals. Having an accountable government means having accountable government employees.

PS: How does your government, agency, enterprise track responsibility?

PPS: Topic maps could also trace, given the appropriate information, who authorized the redactions to the DISA JELA. The first person who should be on a transport as a permanent advisor in Syria. Seriously. Remember what I said about accountable government requiring accountable employees.

May 29, 2015

Ponemon Data Breach Report Has No Business Intelligence

Filed under: Business Intelligence,Cybersecurity,Security — Patrick Durusau @ 8:42 pm

Study: Average cost of data breach is $6.5M by Ashley Carman.

From the post:

In a year already characterized by data breaches at recognizable healthcare organizations, such as CareFirst BlueCross BlueShield, and at major government entities, including the IRS, it’s no surprise that victims’ personal information is a hot commodity.

An annual study from the Ponemon Institute and IBM released on Wednesday found that the average cost per capita cost in a data breach increased to $217 in 2015 from $201 in 2014. Plus, the average total cost of a data breach increased to $6.5 million from $5.8 million the prior year.

The U.S. looked at 62 companies in 16 industry sectors after they experienced the loss or theft of protected personal data and then had to notify victims.

The Ponemon data breach study has no business intelligence. Despite a wealth of detail on expenses of data breaches, not a word on the corresponding costs to avoid those breaches.

Reminds me of the saying “…solar panels provide renewable energy…,” which makes sense, if you ignore the multi-decade cost of recovering your investment. No sane business person would take that flyer.

But many will read anxiously that the “average” data breach cost is $6.5 million. If that were the cost to CareFirst BlueCross BlueShield, its charitable giving, $50,959,000, was nearly eight (8) times that amount, on total revenue of $7.2 billion in 2011. Depending on the cost of greater security, $6.5 million may be a real steal.

Data breach reports should contain business intelligence. Business intelligence requires not only the cost of data breaches but the costs of reducing data breaches. And some methodology for determining which security measures reduce data breach costs by what percentage.
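A sketch of the missing business intelligence, with made-up numbers: the decision turns on whether a security measure costs less than the expected reduction in breach losses it buys. The report supplies the loss side; everything else below is an assumption.

```python
# Illustrative figures only -- the Ponemon report supplies the loss side,
# but not the cost side or the risk reduction, which is the point.
avg_breach_cost = 6_500_000       # average total cost of a breach (report figure)
annual_breach_probability = 0.20  # assumed likelihood of a breach this year
risk_reduction = 0.50             # assumed cut in that likelihood from the measure
security_measure_cost = 400_000   # assumed annual cost of the measure

expected_loss_before = annual_breach_probability * avg_breach_cost
expected_loss_after = annual_breach_probability * (1 - risk_reduction) * avg_breach_cost
net_benefit = (expected_loss_before - expected_loss_after) - security_measure_cost

print(f"Expected loss before: ${expected_loss_before:,.0f}")
print(f"Expected loss after:  ${expected_loss_after:,.0f}")
print(f"Net benefit of the measure: ${net_benefit:,.0f}")
```

Without the probability, cost and reduction figures, that calculation cannot be done, which is exactly the gap in the report.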

Without numbers and a methodology for determining the cost of security improvements, file the Ponemon data breach report with 1970s marketing literature on solar panels.

PS: Solar panels have become much more attractive in recent years but the point is that all business decisions should be made on the basis of cost versus benefit. The Ponemon report is just noise until there is a rational basis for business decisions in this area.

May 9, 2015

David Smith Slays Big Data Straw Person

Filed under: BigData,Business Intelligence — Patrick Durusau @ 4:36 pm

The Business Economics And Opportunity Of Open-Source Data Science by David Smith.

David sets out to slay the big data myth that: “It’s largely hype, with little practical business value.”

Saying:

The second myth, that big data is hype with no clear economic benefits, is also easy to disprove. The fastest-growing sectors of the global economy are enabled by big data technologies. Mobile and social services would be impossible today without big data fueled by open-source software. (Google’s search and advertising businesses were built on top of data science applications running on open-source software.)

You may have read on my blog earlier today, Slicing and Dicing Users – Google Style, which details how Google has built that search and advertising business. If rights to privacy don’t trouble you, Google’s business model beckons.

David is right that the straw person myth he erected, that big data is “…largely hype, with little practical business value,” is certainly a myth.

In his haste to slay that straw person, David overlooks the repeated hype that “there is value in big data.” That claim is incorrect.

You can create value, lots of it, from big data, but that isn’t the same thing. Creating value from big data requires appropriate big data, technical expertise, a clear business plan for a product or service, marketing, all the things that any business requires.

The current hype that “…there is value in big data” reminds me of the header for a lottery by the Virginia Company:

[Image: header of a Virginia Company lottery advertisement]

True enough, Virginia is quite valuable now and has been for some time. However, there was no gold on the ground to be gathered by the sackful, and big data isn’t any different.

Value can and will be extracted from big data, but only by hard work and sound business plans.

Ask yourself, would you invest in a big data project proposed by this person?

[Image: Captain John Smith]

[Images are from: The Project Gutenberg EBook of The Virginia Company Of London, 1606-1624, by Wesley Frank Craven.]

PS: The vast majority of the time I deeply enjoy David Smith‘s posts but I do tire of seeing “there is value in big data” as a religious mantra at every turn. A number of investors are only going to hear “there is value in big data” and not stop to ask why or how. We all suffer when technology bubbles burst. Best not to build them at all.

March 6, 2015

Is Google Dazed and Confused?

Filed under: Business Intelligence — Patrick Durusau @ 10:37 am

I ask because, after a lot of strong talk about security, I read Google backtracks on Android 5.0 default encryption by Kevin C. Tofel, which suggests that Google is backing off its promise of encryption by default. Kevin has the details, but the changed requirement is due to performance issues, or so they say.

But don’t you think Google engineers, the same type of engineers who now routinely beat Atari games with an AI, knew there would be a performance hit from default encryption? Isn’t it a little late in the game to claim that “performance” issues took you by surprise?

The other odd bit of news on Google was Google performs U-turn on Blogger smut rule by Lee Munson.

From the post:


However, many of the people who use the service to publish explicit content complained that Blogger was a means of expressing themselves. Now, it seems like Google has listened to them.

The company will instead focus its attention on preventing the distribution of commercial porn, illegal content and videos and images that have been published without the consent of any persons featured within them.

You have to wonder whether Google is getting bad results from machine learning algorithms for business strategies, or whether it is ignoring the machine learning algorithms.

After hosting porn (a/k/a personal “expression”) for more than a decade, what host would not expect massive pushback from changes to the rules? It should be easy enough to discover how many “expression” accounts exist on Blogger.

Focusing on illegal content or publications without consent makes sense because there is corporate liability that follows notice of its presence. But that’s nothing new.

With the election cycle about to begin, the term flip-flop comes to mind.

Thoughts on who at Google has that much clout?

January 27, 2015

Business Analytics Error: Learn from Uber’s Mistake During the Sydney Terror Attack

Filed under: Algorithms,Business Intelligence,Machine Learning — Patrick Durusau @ 2:17 pm

Business Analytics Error: Learn from Uber’s Mistake During the Sydney Terror Attack by RK Paleru.

From the post:

Recently, as a sad day of terror ended in Sydney, a bad case of Uber’s analytical approach to pricing came to light – an “algorithm based price surge.” Uber’s algorithm driven price surge started overcharging people fleeing the Central Business District (CBD) of Sydney following the terror attack.

I’m not sure the algorithm got it wrong. If you asked me to drive into a potential war zone to ferry strangers out, I suspect a higher fee than normal is to be expected.

The real dilemma for Uber is that not all ground transportation has surge pricing algorithms. When buses, subways, customary taxis, etc. all have surge pricing algorithms, the price hikes won’t appear abnormal.

One of the consequences of an algorithm/data-driven world is that factors known or unknown to you may be driving the price or service. To say it another way, your “expectations” of system behavior may be at odds with how the system will behave.

The inventory algorithm at my local drugstore thought a recent prescription was too unusual to warrant stocking. My drugstore had to order it from a regional warehouse. Just-in-time inventory I think they call it. That was five (5) days ago. That isn’t “just-in-time” for the customer (me) but that isn’t the goal of most cost/pricing algorithms. Particularly when the customer has little choice about the service.

I first saw this in a tweet by Kirk Borne.

April 13, 2014

3 Common Time Wasters at Work

Filed under: Business Intelligence,Marketing,Topic Maps — Patrick Durusau @ 4:32 pm

3 Common Time Wasters at Work by Randy Krum.

See Randy’s post for the graphic but #2 was:

Non-work related Internet Surfing

It occurred to me that “Non-work related Internet Surfing” is indistinguishable from … search. At least at arm’s length or better.

And so many people search poorly that a lack of useful results is easy to explain.

Yes?

So, what is the strategy to get the rank and file to use more efficient information systems than search?

Their non-use or ineffective use of your system can torpedo a sale just as quickly as any other cause.

Suggestions?

February 21, 2014

Business Information Key Resources

Filed under: BI,Business Intelligence,Research Methods,Searching — Patrick Durusau @ 11:19 am

Business Information Key Resources by Karen Blakeman.

From the post:

On one of my recent workshops I was asked if I used Google as my default search tool, especially when conducting business research. The short answer is “It depends”. The long answer is that it depends on the topic and type of information I am looking for. Yes, I do use Google a lot but if I need to make sure that I have covered as many sources as possible I also use Google alternatives such as Bing, Millionshort, Blekko etc. On the other hand and depending on the type of information I require I may ignore Google and its ilk altogether and go straight to one or more of the specialist websites and databases.

Here are just a few of the free and pay-per-view resources that I use.

Starting points for research are a matter of subject, cost, personal preference, recommendations from others, etc.

What are your favorite starting points for business information?

February 4, 2014

Semantics of Business Vocabulary and Business Rules

Filed under: Business Intelligence,Semantics,Vocabularies — Patrick Durusau @ 4:52 pm

Semantics of Business Vocabulary and Business Rules

From 1.2 Applicability:

The SBVR specification is applicable to the domain of business vocabularies and business rules of all kinds of business activities in all kinds of organizations. It provides an unambiguous, meaning-centric, multilingual, and semantically rich capability for defining meanings of the language used by people in an industry, profession, discipline, field of study, or organization.

This specification is conceptualized optimally for business people rather than automated processing. It is designed to be used for business purposes, independent of information systems designs to serve these business purposes:

  • Unambiguous definition of the meaning of business concepts and business rules, consistently across all the terms, names and other representations used to express them, and across the natural languages in which those representations are expressed, so that they are not easily misunderstood either by “ordinary business people” or by lawyers.
  • Expression of the meanings of concepts and business rules in the wordings used by business people, who may belong to different communities, so that each expression wording is uniquely associated with one meaning in a given context.
  • Transformation of the meanings of concepts and business rules as expressed by humans into forms that are suitable to be processed by tools, and vice versa.
  • Interpretation of the meanings of concepts and business rules in order to discover inconsistencies and gaps within an SBVR Content Model (see 2.4) using logic-based techniques.
  • Application of the meanings of concepts and business rules to real-world business situations in order to enable reproducible decisions and to identify conformant and non-conformant business behavior.
  • Exchange of the meanings of concepts and business rules between humans and tools as well as between tools without losing information about the essence of those meanings.

I do need to repeat their warning from 6.2 How to Read this Specification:

This specification describes a vocabulary, or actually a set of vocabularies, using terminological entries. Each entry includes a definition, along with other specifications such as notes and examples. Often, the entries include rules (necessities) about the particular item being defined.

The sequencing of the clauses in this specification reflects the inherent logical order of the subject matter itself. Later clauses build semantically on the earlier ones. The initial clauses are therefore rather ‘deep’ in terms of SBVR’s grounding in formal logics and linguistics. Only after these clauses are presented do clauses more relevant to day-to-day business communication and business rules emerge.

This overall form of presentation, essential for a vocabulary standard, unfortunately means the material is rather difficult to approach. A figure presented for each sub-vocabulary does help illustrate its structure; however, no continuous ‘narrative’ or explanation is appropriate.

😉

OK, so you aren’t going to read it for giggles. But you will encounter it in the wild world of data, so at least mark the reference.

I first saw this in a tweet by Stian Danenbarger.

January 6, 2014

TU Delft Spreadsheet Lab

Filed under: Business Intelligence,Data Mining,Spreadsheets — Patrick Durusau @ 5:07 pm

TU Delft Spreadsheet Lab

From the about page:

The Delft Spreadsheet Lab is part of the Software Engineering Research Group of the Delft University of Technology. The lab is headed by Arie van Deursen and Felienne Hermans. We work on diverse topics concerning spreadsheets, such as spreadsheet quality, design patterns testing and refactoring. Our current members are:

This project started last June so there isn’t a lot of content here, yet.

Still, I mention it as a hedge against the day that some CEO “discovers” all the BI locked up in spreadsheets that are scattered from one end of their enterprise to another.

Perhaps they will name it: Big Relevant Data, or some such.

Oh, did I mention that spreadsheets have no change tracking? Or any means to document, as part of the spreadsheet, the semantics of its data or operations?

At some point those and other issues are going to become serious concerns, not to mention demands upon IT to do something, anything.

For IT to have a reasoned response to demands of “do something, anything,” a better understanding of spreadsheets is essential.

PS: Before all the Excel folks object that Excel does track changes, you might want to read: Track Changes in a Shared Workbook. As Obi-Wan Kenobi would say, “it’s true, Excel does track changes, from a certain point of view.” 😉
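Pending a better understanding of spreadsheets, a crude external workaround is possible: fingerprint a workbook’s contents on every save, so at least you can tell that something changed and when, even if not what or why. A sketch, assuming the openpyxl library and an .xlsx file; the file and log names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

from openpyxl import load_workbook  # assumes openpyxl is installed


def workbook_fingerprint(path):
    """Hash every cell value, sheet by sheet, to detect changes between saves."""
    wb = load_workbook(path, data_only=False)
    digest = hashlib.sha256()
    for ws in wb.worksheets:
        digest.update(ws.title.encode())
        for row in ws.iter_rows(values_only=True):
            digest.update(repr(row).encode())
    return digest.hexdigest()


def record_snapshot(path, log_file="spreadsheet_changes.jsonl"):
    """Append a timestamped fingerprint; differing fingerprints mean the sheet changed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "file": path,
        "fingerprint": workbook_fingerprint(path),
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


# record_snapshot("quarterly_forecast.xlsx")
```

It is a far cry from real change tracking or documented semantics, which is rather the point: the tooling gap is real.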

December 3, 2013

Announcing Open LEIs:…

Filed under: Business Intelligence,Identifiers,Open Data — Patrick Durusau @ 11:04 am

Announcing Open LEIs: a user-friendly interface to the Legal Entity Identifier system

From the post:

Today, OpenCorporates announces a new sister website, Open LEIs, a user-friendly interface on the emerging Global Legal Entity Identifier System.

At this point many, possibly most, of you will be wondering: what on earth is the Global Legal Entity Identifier System? And that’s one of the reasons why we built Open LEIs.

The Global Legal Entity Identifier System (aka the LEI system, or GLEIS) is a G20/Financial Stability Board-driven initiative to solve the issues of identifiers in the financial markets. As we’ve explained in the past, there are a number of identifiers out there, nearly all of them proprietary, and all of them with quality issues (specifically not mapping one-to-one with legal entities). Sometimes just company names are used, which are particularly bad identifiers, as not only can they be represented in many ways, they frequently change, and are even reused between different entities.

This problem is particularly acute in the financial markets, meaning that regulators, banks, market participants often don’t know who they are dealing with, affecting everything from the ability to process trades automatically to performing credit calculations to understanding systematic risk.

The LEI system aims to solve this problem, by providing permanent, IP-free, unique identifiers for all entities participating in the financial markets (not just companies but also municipalities who issue bonds, for example, and mutual funds whose legal status is a little greyer than companies).

The post cites five key features for Open LEIs:

  1. Search on names (despite slight misspellings) and addresses
  2. Browse the entire (100,000 record) database and/or filter by country, legal form, or the registering body
  3. A permanent URL for each LEI
  4. Links to OpenCorporates for additional data
  5. Data is available as XML or JSON

As the post points out, the data isn’t complete but dragging legal entities out into the light is never easy.
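If you want to poke at the data programmatically, something along these lines should work: download a record as JSON from its permanent URL and pull out the fields you care about. The URL pattern and field names below are assumptions, not documented API details, so check them against an actual record first.

```python
import requests  # assumes the requests library is installed

# Hypothetical record URL -- substitute the permanent URL Open LEIs gives for an LEI.
RECORD_URL = "https://openleis.com/legal_entities/EXAMPLE-LEI.json"


def fetch_lei_record(url=RECORD_URL):
    """Fetch one LEI record as JSON and return a few commonly present fields."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    record = response.json()
    # Field names are guesses at the common LEI data format; adjust to the real payload.
    return {
        "lei": record.get("lei"),
        "legal_name": record.get("legal_name"),
        "jurisdiction": record.get("jurisdiction"),
        "status": record.get("registration_status"),
    }


# print(fetch_lei_record())
```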

Use this resource and support it if you are interested in more and not less financial transparency.

October 17, 2013

Predictive Analytics 101

Filed under: Analytics,Business Intelligence,Predictive Analytics — Patrick Durusau @ 6:05 pm

Predictive Analytics 101 by Ravi Kalakota.

From the post:

Insight, not hindsight is the essence of predictive analytics. How organizations instrument, capture, create and use data is fundamentally changing the dynamics of work, life and leisure.

I strongly believe that we are on the cusp of a multi-year analytics revolution that will transform everything.

Using analytics to compete and innovate is a multi-dimensional issue. It ranges from simple (reporting) to complex (prediction).

Reporting on what is happening in your business right now is the first step to making smart business decisions. This is the core of KPI scorecards or business intelligence (BI). The next level of analytics maturity takes this a step further. Can you understand what is taking place (BI) and also anticipate what is about to take place (predictive analytics).

By automatically delivering relevant insights to end-users, managers and even applications, predictive decision solutions aims to reduces the need of business users to understand the ‘how’ and focus on the ‘why.’ The end goal of predictive analytics = [Better outcomes, smarter decisions, actionable insights, relevant information].

How you execute this varies by industry and information supply chain (Raw Data -> Aggregated Data -> Contextual Intelligence -> Analytical Insights (reporting vs. prediction) -> Decisions (Human or Automated Downstream Actions)).

There are four types of data analysis:

    • Simple summation and statistics
    • Predictive (forecasting),
    • Descriptive (business intelligence and data mining) and
    • Prescriptive (optimization and simulation)

Predictive analytics leverages four core techniques to turn data into valuable, actionable information:

  1. Predictive modeling
  2. Decision Analysis and Optimization
  3. Transaction Profiling
  4. Predictive Search

This post is a very good introduction to predictive analytics.
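Of the four core techniques listed above, predictive modeling is the easiest to demonstrate. A toy sketch, with invented numbers, using scikit-learn: fit a model to history and forecast the next period, which is the “anticipate what is about to take place” step Ravi describes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Twelve months of (invented) historical sales, in thousands.
months = np.arange(1, 13).reshape(-1, 1)
sales = np.array([110, 115, 121, 119, 127, 133, 138, 140, 146, 151, 155, 162])

# "Hindsight" is the table above; "insight" is using it to anticipate month 13.
model = LinearRegression().fit(months, sales)
forecast = model.predict(np.array([[13]]))[0]

print(f"Forecast for month 13: {forecast:,.1f}k")
```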

You may have to do some hand holding to get executives through it but they will be better off for it.

When you need support for more training of executives, use this graphic from Ravi’s post:

[Graphic from Ravi’s post: the useful data gap]

That startled even me. 😉

May 12, 2013

Contextifier: Automatic Generation of Annotated Stock Visualizations

Filed under: Annotation,Business Intelligence,Interface Research/Design,News — Patrick Durusau @ 4:36 pm

Contextifier: Automatic Generation of Annotated Stock Visualizations by Jessica Hullman, Nicholas Diakopoulos and Eytan Adar.

Abstract:

Online news tools—for aggregation, summarization and automatic generation—are an area of fruitful development as reading news online becomes increasingly commonplace. While textual tools have dominated these developments, annotated information visualizations are a promising way to complement articles based on their ability to add context. But the manual effort required for professional designers to create thoughtful annotations for contextualizing news visualizations is difficult to scale. We describe the design of Contextifier, a novel system that automatically produces custom, annotated visualizations of stock behavior given a news article about a company. Contextifier’s algorithms for choosing annotations is informed by a study of professionally created visualizations and takes into account visual salience, contextual relevance, and a detection of key events in the company’s history. In evaluating our system we find that Contextifier better balances graphical salience and relevance than the baseline.

The authors use a stock graph as the primary context in which to link in other news about a publicly traded company.

Other aspects of Contextifier were focused on enhancement of that primary context.

The lesson here is that a tool with a purpose is easier to hone than a tool that could be anything for just about anybody.
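The paper’s annotation-selection algorithm weighs visual salience, contextual relevance and key events; as a rough illustration of just the key-event part (not the authors’ actual method), you can flag the days where the price moved most and attach whatever company news you have for those dates.

```python
import pandas as pd

# Invented daily closing prices for a ticker.
prices = pd.Series(
    [42.0, 42.3, 41.8, 45.1, 44.9, 44.7, 39.9, 40.2, 40.5, 40.1],
    index=pd.date_range("2013-04-01", periods=10, freq="B"),
)

# Flag "key events" as the days with the largest absolute daily returns.
returns = prices.pct_change().abs().dropna()
key_event_days = returns.nlargest(2).index

print("Candidate annotation dates:", list(key_event_days.date))
```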

I first saw this at Visualization Papers at CHI 2013 by Enrico Bertini.

April 11, 2013

Spreadsheet is Still the King of all Business Intelligence Tools

Filed under: Business Intelligence,Marketing,Spreadsheets,Topic Maps — Patrick Durusau @ 4:01 pm

Spreadsheet is Still the King of all Business Intelligence Tools by Jim King.

From the post:

The technology consulting firm Gartner Group Inc. once precisely predicated that BI would be the hottest technology in 2012. The year of 2012 witnesses the sharp and substantial increase of BI. Unexpectedly, spreadsheet turns up to be the one developed and welcomed most, instead of the SAP BusinessObjects, IBM Cognos, QlikTech Qlikview, MicroStrateg, or TIBCO Spotfire. In facts, no matter it is in the aspect of total sales, customer base, or the increment, the spreadsheet is straight the top one.

Why the spreadsheet is still ruling the BI world?

See Jim’s post for the details but the bottom line was:

It is the low technical requirement, intuitive and flexible calculation capability, and business-expert-oriented easy solution to the 80% BI problems that makes the spreadsheet still rule the BI world.

Question:

How do you translate:

  • low technical requirement
  • intuitive and flexible calculation capacity (or its semantic equivalent)
  • business-expert-oriented solution to the 80% of BI problems

into a topic map application?

March 21, 2013

Should Business Data Have An Audit Trail?

Filed under: Auditing,Business Intelligence,Datomic,Transparency — Patrick Durusau @ 11:19 am

The “second slide” I would lead with from Stuart Halloway’s Datomic, and How We Built It would be:

Should Business Data Have An Audit Trail?

Actually Stuart’s slide #65 but who’s counting? 😉

Stuart points out the irony of git, saying:

developer data is important enough to have an audit trail, but business data is not

Whether business data should always have an audit trail would attract shouts of yes and no, depending on the audience.

Regulators, prosecutors, good government types, etc., mostly shouting yes.

Regulated businesses, security brokers, elected officials, etc., mostly shouting no.

Some in between.

Datomic, which has some common characteristics with topic maps, gives you the ability to answer these questions:

  • Do you want auditable business data or not?
  • If yes to auditable business data, to what degree?

Rather different than just assuming it isn’t possible.
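As a plain-Python illustration of what “auditable business data” means (the idea only, not Datomic itself), compare updating a record in place with keeping an append-only log of time-stamped facts: the latter lets you ask what was true, and when, after the fact.

```python
from datetime import datetime, timezone

# Append-only log of facts: (entity, attribute, value, timestamp, added?).
facts = []


def assert_fact(entity, attribute, value):
    """Record a new fact without erasing the old one."""
    facts.append((entity, attribute, value, datetime.now(timezone.utc), True))


def retract_fact(entity, attribute, value):
    """Record that a fact stopped being true, again without erasing history."""
    facts.append((entity, attribute, value, datetime.now(timezone.utc), False))


def value_as_of(entity, attribute, as_of):
    """Replay the log to answer: what was the value at a given point in time?"""
    current = None
    for e, a, v, t, added in facts:
        if e == entity and a == attribute and t <= as_of:
            current = v if added else None
    return current


assert_fact("invoice-17", "status", "pending")
assert_fact("invoice-17", "status", "paid")
print(value_as_of("invoice-17", "status", datetime.now(timezone.utc)))  # "paid"
```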

Abstract:

Datomic is a database of flexible, time-based facts, supporting queries and joins, with elastic scalability and ACID transactions. Datomic queries run your application process, giving you both declarative and navigational access to your data. Datomic facts (“datoms”) are time-aware and distributed to all system peers, enabling OLTP, analytics, and detailed auditing in real time from a single system.

In this talk, I will begin with an overview of Datomic, covering the problems that it is intended to solve and how its data model, transaction model, query model, and deployment model work together to solve those problems. I will then use Datomic to illustrate more general points about designing and implementing production software, and where I believe our industry is headed. Key points include:

  • the pragmatic adoption of functional programming
  • how dynamic languages fare in mission- and performance- critical settings
  • the importance of data, and the perils of OO
  • the irony of git, or why developers give themselves better databases than they give their customers
  • perception, coordination, and reducing the barriers to scale

Resources

  • Video from CME Group Technology Conference 2012
  • Slides from CME Group Technology Conference 2012

February 17, 2013

REMOTE: Office Not Required

Filed under: Books,Business Intelligence — Patrick Durusau @ 8:18 pm

REMOTE: Office Not Required

From the post:

As an employer, restricting your hiring to a small geographic region means you’re not getting the best people you can. As an employee, restricting your job search to companies within a reasonable commute means you’re not working for the best company you can. REMOTE, the new book by 37signals, shows both employers and employees how they can work together, remotely, from any desk, in any space, in any place, anytime, anywhere.

REMOTE will be published in the fall of 2013 by Crown (Random House).

I was so impressed by Rework (see: Emulate Drug Dealers [Marketing Topic Maps]) that I am recommending REMOTE ahead of its publication.

Whether the lessons in REMOTE will be heard by most employers, or shall we say their managers, remains to be seen.

Perhaps performance in revenue and the stock market will be important clues. 😉

February 12, 2013

How to Implement Lean BI

Filed under: Business Intelligence — Patrick Durusau @ 6:19 pm

How to Implement Lean BI by Steve Dine.

A followup to his Why Most BI Programs Under-Deliver Value.

General considerations:

Many people hear the word “Lean” and it conjures up images of featureless tools, limited budgets, reduced development and the elimination of jobs. Dispelling those myths out of the gate is crucial in order to garner support for implementing Lean BI from the organization and the BI team. If team members feel that by becoming lean they are working themselves out of a job then they will not support your efforts. If your customers feel that they will receive less service or be relegated to using suboptimal tools then they may not support your efforts as well.

So, what is Lean BI? Lean BI is about focusing on customer value and generating additional value by accomplishing more with existing resources by eliminating waste….

Some highlights:

  1. Focus on Customer Value

    Value is defined as meeting or exceeding the customer needs at a specific cost at a specific time and, as mentioned in my last article, can only be defined by the customer. Anything that consumes resources that does not deliver customer value is considered waste….

  2. See the Whole Picture

    Learn to see beyond each individual architectural decision, organizational issue or technical problem by considering how they relate in a wider context. When business users make decisions and solve problems, they often only consider the immediate symptom rather than the root cause issue….

  3. Iterate Quickly

    It is often the case that by the time a project is implemented, the requirements have changed and part of what is implemented is not required anymore or is no longer a priority. When features, reports and data elements are implemented that aren’t utilized, it is considered waste….

  4. Reduce Variation

    Variation in BI is caused by a lack of standardization in processes, design, procedures, development and practices. Variation is introduced when work is initiated and implemented both inside and outside of the BI group. It causes waste in a number of ways including the added time to reverse engineer what others have developed, recovering ETL jobs caused by maintenance overlap, the extra time searching for scripts and reports, and the duplication of development caused by two developers working on the same file….

  5. Pursue Perfection

    Perfection is a critical component of Lean BI even though the key to successfully pursuing it is the understanding that you will never get there. The key to pursuing perfection is to focus on continuous improvement in an increment fashion….

Read Steve’s post for more analysis and his suggestions on possible solutions to these issues.

From a topic map perspective:

  1. Focus on Customer Value: A topic map solution can focus on specifics that return ROI to the customer. If you don’t need or want particular forms of inferencing, they can be ignored.
  2. See the Whole Picture: A topic map can capture and preserve relationships between businesses processes. Particularly ones discovered in earlier projects. Enabling teams to make new mistakes, not simply repeat old ones.
  3. Iterate Quickly: With topic maps you aren’t bound to decisions made by projects such as SUMO or Cyc. Your changes and models are just that, yours. You don’t need anyone’s permission to make changes.
  4. Reduce Variation: Some variation can be reduced but other variation, between departments or locations may successfully resist change. Topic maps can document variation and provide mappings to get around resistance to eliminating variation.
  5. Pursue Perfection: Topic maps support incremental change by allowing you to choose how much change you can manage. Not to mention that systems can still appear to other users as though they are unchanged. Unseen change is the most acceptable form of change.

Highly recommend you read both of Steve’s posts.

February 10, 2013

Why Most BI Programs Under-Deliver Value

Filed under: Business Intelligence,Data Integration,Data Management,Integration,Semantics — Patrick Durusau @ 1:52 pm

Why Most BI Programs Under-Deliver Value by Steve Dine.

From the post:

Business intelligence initiatives have been undertaken by organizations across the globe for more than 25 years, yet according to industry experts between 60 and 65 percent of BI projects and programs fail to deliver on the requirements of their customers.

This impact of this failure reaches far beyond the project investment, from unrealized revenue to increased operating costs. While the exact reasons for failure are often debated, most agree that a lack of business involvement, long delivery cycles and poor data quality lead the list. After all this time, why do organizations continue to struggle with delivering successful BI? The answer lies in the fact that they do a poor job at defining value to the customer and how that value will be delivered given the resource constraints and political complexities in nearly all organizations.

BI is widely considered an umbrella term for data integration, data warehousing, performance management, reporting and analytics. For the vast majority of BI projects, the road to value definition starts with a program or project charter, which is a document that defines the high level requirements and capital justification for the endeavor. In most cases, the capital justification centers on cost savings rather than value generation. This is due to the level of effort required to gather and integrate data across disparate source systems and user developed data stores.

As organizations mature, the number of applications that collect and store data increase. These systems usually contain few common unique identifiers to help identify related records and are often referred to as data silos. They also can capture overlapping data attributes for common organizational entities, such as product and customer. In addition, the data models of these systems are usually highly normalized, which can make them challenging to understand and difficult for data extraction. These factors make cost savings, in the form of reduced labor for data collection, easy targets. Unfortunately, most organizations don’t eliminate employees when a BI solution is implemented; they simply work on different, hopefully more value added, activities. From the start, the road to value is based on a flawed assumption and is destined to under deliver on its proposition.

This post merits a close read, several times.

In particular I like the focus on delivery of value to the customer.

Err, that would be the person paying you to do the work.

Steve promises a follow-up on “lean BI” that focuses on delivering more value than it costs to deliver.

I am inherently suspicious of “lean” or “agile” approaches. I sat on a committee that was assured by three programmers that they had improved upon IBM’s programming methodology, though they declined to share the details.

Their requirements document for a content management system, to be constructed on top of subversion, was a paragraph in an email.

Fortunately the committee prevailed upon management to tank the project. The programmers persist, management being unable or unwilling to correct past mistakes.

I am sure there are many agile/lean programming projects that deliver well documented, high quality results.

But I don’t start with the assumption that agile/lean or other methodology projects are well documented.

That is a question of fact. One that can be answered.

Refusal to answer due to time or resource constraints, is a very bad sign.

I first saw this in a top ten tweets list from KDNuggets.

October 9, 2012

“The treacherous are ever distrustful…” (Gandalf to Saruman at Orthanc)

Filed under: Business Intelligence,Marketing,Transparency — Patrick Durusau @ 12:29 pm

Andrew Gelman’s post: Ethical standards in different data communities reminded me of this quote from The Two Towers (Lord of the Rings, Book II, J.R.R. Tolkien).

Andrew reports on a widely repeated claim by a former associate of a habitual criminal offender enterprise that recent government statistics were “cooked” to help President Obama in his re-election campaign.

After examining motives for “cooking” data and actual instances of data being “cooked” (by the habitual criminal offender enterprise), Andrew remarks:

One reason this interests me is the connection to ethics in the scientific literature. Jack Welch has experience in data manipulation and so, when he sees a number he doesn’t like, he suspects it’s been manipulated.

The problem is that anyone searching for this accusation, or for further information about the former associate or the habitual criminal offender enterprise, is unlikely to encounter GE: Decades of Misdeeds and Wrongdoing.

Everywhere the GE stock ticker appears, there should be a link to: GE Corporate Criminal History. With links to the original documents, including pleas, fines, individuals, etc. Under whatever name or guise the activity was conducted.

This isn’t an anti-corruption rant. People in criminal offender enterprises should be able to judge for themselves the trustworthiness of their individual counterparts in other enterprises.

Although, someone willing to cheat the government is certainly ready to cheat you.

Topic maps can deliver that level of transparency.

Or not, if you are the sort with a “cheating heart.”

September 14, 2012

First Party Fraud (In Four Parts)

Filed under: Business Intelligence,Graphs,Networks,Social Graphs,Social Networks — Patrick Durusau @ 1:00 pm

Mike Betron has written a four-part series on first-party fraud that merits your attention:

First Party Fraud [Part 1]

What is First Party Fraud?

First-party fraud (FPF) is defined as when somebody enters into a relationship with a bank using either their own identity or a fictitious identity with the intent to defraud. First-party fraud is different from third-party fraud (also known as “identity fraud”) because in third-party fraud, the perpetrator uses another person’s identifying information (such as a social security number, address, phone number, etc.). FPF is often referred to as a “victimless” crime, because no consumers or individuals are directly affected. The real victim in FPF is the bank, which has to eat all of the financial losses.

First-Party Fraud: How Do We Assess and Stop the Damage? [Part 2]

Mike covers the cost of first party fraud and then why it is so hard to combat.

Why is it so hard to detect FPF?

Given the amount of financial pain incurred by bust-out fraud, you might wonder why banks haven’t developed a solution and process for detecting and stopping it.

There are three primary reasons why first-party fraud is so hard to identify and block:

1) The fraudsters look like normal customers

2) The crime festers in multiple departments

3) The speed of execution is very fast

Fighting First Party Fraud With Social Link Analysis (3 of 4)

And you know, those pesky criminals won’t use their universally assigned identifiers for financial transactions. (Any security system that relies on good faith isn’t a security system, it’s an opportunity.)

A Trail of Clues Left by Criminals

Although organized fraudsters are sophisticated, they often leave behind evidence that can be used to uncover networks of organized crime. Fraudsters know that due to Know Your Customer (KYC) and Customer Due Diligence (CDD) regulations, their identification will be verified when they open an account with a financial institution. To pass these checks, the criminals will either modify their own identity slightly or else create a synthetic identity, which consists of combining real identity information (e.g., a social security number) with fake identity information (names, addresses, phone numbers, etc.).

Fortunately for banks, false identity information can be expensive and inconvenient to acquire and maintain. For example, apartments must be rented out to maintain a valid address. Additionally, there are only so many cell phones a person can carry at one time and only so many aliases that can be remembered. Because of this, fraudsters recycle bits and pieces of these valuable assets.

The reuse of identity information has inspired Infoglide to begin to create new technology on top of its IRE platform called Social Link Analysis (SLA). SLA works by examining the “linkages” between the recycled identities, therefore identifying potential fraud networks. Once the networks are detected, Infoglide SLA applies advanced analytics to determine the risk level for both the network and for every individual associated with that network.
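
The linking idea is easy to sketch, even though Infoglide's SLA is certainly far more sophisticated. A minimal, hypothetical version: treat shared identity attributes (SSN, phone, address) as links and flag the unusually large connected components. None of this is Infoglide code; the names, records and threshold are invented.

```python
# Hypothetical sketch of attribute-based link analysis (not Infoglide's SLA):
# applicants that share a phone, SSN, or address end up in the same component.
from collections import defaultdict

applicants = {
    "A": {"ssn": "111-11-1111", "phone": "404-555-0100", "address": "12 Elm St"},
    "B": {"ssn": "222-22-2222", "phone": "404-555-0100", "address": "98 Oak Ave"},
    "C": {"ssn": "222-22-2222", "phone": "404-555-0199", "address": "98 Oak Ave"},
    "D": {"ssn": "333-33-3333", "phone": "404-555-0777", "address": "7 Pine Rd"},
}

# Build edges between applicants that reuse any identity attribute.
by_value = defaultdict(list)
for person, attrs in applicants.items():
    for field, value in attrs.items():
        by_value[(field, value)].append(person)

adjacency = defaultdict(set)
for group in by_value.values():
    for a in group:
        for b in group:
            if a != b:
                adjacency[a].add(b)

def components(nodes, adjacency):
    """Connected components = candidate 'networks'."""
    seen, result = set(), []
    for node in nodes:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adjacency[n] - comp)
        seen |= comp
        result.append(comp)
    return result

for comp in components(applicants, adjacency):
    if len(comp) >= 3:  # arbitrary threshold, for illustration only
        print("possible ring:", sorted(comp))
```

Most people land in tiny components (a household); the few large ones are where the bust-out candidates hide, which is exactly the point made in part 4 below.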

First Party Fraud (post 4 of 4) – A Use Case

As discussed in our previous blog in this series, Social Link Analysis works by identifying linkages between individuals to create a social network. Social Link Analysis can then analyze the network to identify organized crime, such as bust-out fraud and internal collusion.

During the Social Link Analysis process, every individual is connected to a single network. An analysis at a large tier 1 bank will turn up millions of networks, but the majority of individuals only belong to very small networks (such as a husband and wife, and possibly a child). However, the social linking process will certainly turn up a small percentage of larger networks of interconnected individuals. It is in these larger networks where participants of bust-out fraud are hiding.

Due to the massive number of networks within a system, the analysis is performed mathematically (e.g. without user interface) and scores and alerts are generated. However, any network can be “visualized” using the software to create a graphic display of information and connections. In this example, we’ll look at a visualization of a small network that the social link analysis tool has alerted as a possible fraud ring.

A word of caution.

To leap from the example individuals being related to each other to:

As a result, Social Link Analysis has detected four members of a network, each with various amounts of charged-off fraud.

is quite a leap.

Having charged-off loans, with reuse of telephone numbers and a mobile population, doesn’t necessarily mean anyone is guilty of “charged-off fraud.”

Could be, but you should tread carefully and with legal advice before jumping to conclusions of fraud.

For good customer relations, if not avoiding bad PR and legal liability.

PS: Topic maps can help with this type of data, including mapping in the bank locations or even the personnel who accepted particular loans.
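
If you do map in branches and personnel, even a trivial grouping can surface the internal collusion angle mentioned in part 3 of the series. A hypothetical sketch; every record below is invented.

```python
# Hypothetical sketch: group charged-off loans by the branch and employee who
# accepted them, to see whether losses cluster around particular staff.
from collections import Counter

charged_off = [
    {"loan": "L-1", "branch": "Midtown",  "officer": "officer_17"},
    {"loan": "L-2", "branch": "Midtown",  "officer": "officer_17"},
    {"loan": "L-3", "branch": "Westside", "officer": "officer_04"},
    {"loan": "L-4", "branch": "Midtown",  "officer": "officer_17"},
]

by_officer = Counter((row["branch"], row["officer"]) for row in charged_off)
for (branch, officer), count in by_officer.most_common():
    print(branch, officer, count)  # ('Midtown', 'officer_17') stands out with 3
```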

August 13, 2012

Are You An IT Hostage?

As I promised last week in From Overload to Impact: An Industry Scorecard on Big Data Business Challenges [Oracle Report], here is the key finding missing from Oracle’s summary:

Executives’ Biggest Data Management Gripes:*

#1 Don’t have the right systems in place to gather the information we need (38%)

#2 Can’t give our business managers access to the information they need; need to rely on IT (36%)

Ask your business managers: Do they feel like IT hostages?

You are likely to be surprised at the answers you get.

IT’s vocabulary acts as an information clog.

A clog that impedes the flow of information in your organization.

Information that can improve the speed and quality of business decision making.

The critical point is: Information clogs are bad for business.

Do you want to borrow my plunger?

August 10, 2012

From Overload to Impact: An Industry Scorecard on Big Data Business Challenges [Oracle Report]

From Overload to Impact: An Industry Scorecard on Big Data Business Challenges [Oracle Report]

Summary:

IT powers today’s enterprises, which is particularly true for the world’s most data-intensive industries. Organizations in these highly specialized industries increasingly require focused IT solutions, including those developed specifically for their industry, to meet their most pressing business challenges, manage and extract insight from ever-growing data volumes, improve customer service, and, most importantly, capitalize on new business opportunities.

The need for better data management is all too acute, but how are enterprises doing? Oracle surveyed 333 C-level executives from U.S. and Canadian enterprises spanning 11 industries to determine the pain points they face regarding managing the deluge of data coming into their organizations and how well they are able to use information to drive profit and growth.

Key Findings:

  • 94% of C-level executives say their organization is collecting and managing more business information today than two years ago, by an average of 86% more
  • 29% of executives give their organization a “D” or “F” in preparedness to manage the data deluge
  • 93% of executives believe their organization is losing revenue – on average, 14% annually – as a result of not being able to fully leverage the information they collect
  • Nearly all surveyed (97%) say their organization must make a change to improve information optimization over the next two years
  • Industry-specific applications are an important part of the mix; 77% of organizations surveyed use them today to run their enterprise—and they are looking for more tailored options

What key finding did they miss?

They cover it in the forty-two (42) page report but it doesn’t appear here.

Care to guess what it is?

Forgotten key finding post coming Monday, 13 August 2012. Watch for it!

I first saw this at Beyond Search.

July 18, 2012

Building a Simple BI Solution in Excel 2013 (Part 1 & 2)

Filed under: Business Intelligence,Excel — Patrick Durusau @ 6:39 pm

Chris Webb writes up a quick BI solution in Excel 2013:

Building a Simple BI Solution in Excel 2013, Part 1

and

Building a Simple BI Solution in Excel 2013, Part 2

In the process Chris uncovers some bugs and disappointments, but on the whole the application works.

I mention it for a couple of reasons.

If you recall, something like 75% of the BI market is held by Excel. I don’t expect that to change any time soon.

What do you think happens when “self-service” BI applications are created by users? Other than becoming the default applications for offices and groups in organizations?

Are different users going to make different choices with their Excel BI applications?

Will users with different Excel BI applications resort to knives, if not guns, to avoid changing their Excel BI applications?

Excel in its many versions leads to varying and inconsistent “self-service” applications in 75% of the BI marketplace.

Is it just me or does that sound like an opportunity for topic maps to you?
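
To see how small the first step can be, here is a minimal, hypothetical sketch: two "self-service" workbooks report the same monthly figure under different column names, and a small mapping reads both without asking either owner to change a thing. The exports, column names and numbers are invented, not from Chris's posts.

```python
# Hypothetical sketch: two teams' "self-service" BI exports use different
# column names for the same subjects. A small mapping lets you read both.

FINANCE_EXPORT = [{"Period": "2012-06", "Net Rev": 120_000}]
SALES_EXPORT   = [{"Month":  "2012-06", "Revenue": 118_500}]

COLUMN_MAP = {
    "finance": {"period": "Period", "revenue": "Net Rev"},
    "sales":   {"period": "Month",  "revenue": "Revenue"},
}

def to_canonical(rows, source):
    """Rewrite one export into the shared vocabulary, keeping its provenance."""
    cols = COLUMN_MAP[source]
    return [{"period": r[cols["period"]], "revenue": r[cols["revenue"]], "source": source}
            for r in rows]

combined = to_canonical(FINANCE_EXPORT, "finance") + to_canonical(SALES_EXPORT, "sales")
for row in combined:
    print(row)  # both exports in one vocabulary, and the discrepancy is now visible
```

Nobody's workbook changed, nobody reached for a knife, and the two numbers can finally be compared side by side.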

July 7, 2012

Subverting Ossified Departments [Moving beyond name calling]

Filed under: Analytics,Business Intelligence,Marketing,Topic Maps — Patrick Durusau @ 10:21 am

Brian Sommer has written on why analytics will not lead to new revenue streams, improved customer service, better stock options or other signs of salvation:

The Ossified Organization Won’t ‘Get’ Analytics (part 1 of 3)

How Tough Will Analytics Be in Ossified Firms? (Part 2 of 3)

Analytics and the Nimble Organization (part 3 of 3)

Why most firms won’t profit from analytics:

… Every day, companies already get thousands of ideas for new products, process innovations, customer interaction improvements, etc. and they fail to act on them. The rationale for this lack of movement can be:

– That’s not the way we do things here

– It’s a good idea but it’s just not us

– It’s too big of an idea

– It will be too disruptive

– We’d have to change so many things

– I don’t know who would be responsible for such a change

And, of course,

– It’s not my job

So if companies don’t act on the numerous, free suggestions from current customers and suppliers, why are they so deluded into thinking that IT-generated, analytic insights will actually fare better? They’re kidding themselves.

[part 1]

What Brian describes in amusing and great detail are all failures that no amount of IT, analytics or otherwise, can address. Not a technology problem. Not even an organization (as in form) issue.

It is a personnel issue. You can either retrain (which I find unlikely to succeed) or you can get new personnel. It really is that simple. And with a glutted IT market, now would be the time to recruit an IT department not wedded to current practices. But you would need to do the same in accounting, marketing, management, etc.

But calling a department “ossified” is just name calling. You have to move beyond name calling to establish a bottom-line reason for change.

Assuming you have access, topic maps can help you integrate data across departments that don’t usually interchange data. So you can make the case for particular changes in terms of bottom-line expenses.

Here is a true story with the names omitted and the context changed a bit:

Assume you are a publisher of journals, with both institutional and personal subscriptions. One of the things that all periodical publishers have to address are claims for “missing” issues. It happens, mail room mistakes, postal system errors, simply lost in transit, etc. Subscribers send in claims for those missing issues.

Some publishers maintain records of all subscriptions, including any correspondence and records, which are consulted by a full-time staffer who answers all “claim” requests. One argument is that there is a moral obligation to make sure non-subscribers don’t get an issue to which they are not entitled. Seriously, I have heard that argument made.

Analytics and topic maps could combine the subscription records with claim records and expenses for running the claims operation to show the expense of detailed claim service. Versus the cost of having the mail room toss another copy back to the requester. (Our printing cost was $3.00/copy so the math wasn’t the hard part.)
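
The arithmetic really isn't the hard part. A hypothetical back-of-the-envelope sketch follows; the only figure from the story is the $3.00 printing cost, everything else is invented for illustration.

```python
# Hypothetical sketch: compare the cost of investigating every claim against
# the cost of simply reshipping the issue. Only the $3.00/copy printing cost
# comes from the story; all other numbers are invented.

claims_per_year     = 2_000
printing_per_copy   = 3.00      # from the story
postage_per_copy    = 1.50      # assumed
staffer_cost        = 45_000    # assumed fully loaded cost of the claims clerk
records_system_cost = 5_000     # assumed annual cost of keeping claim records

investigate_everything = staffer_cost + records_system_cost
just_reship            = claims_per_year * (printing_per_copy + postage_per_copy)

print(f"investigate every claim: ${investigate_everything:,.2f}/year")
print(f"reship on request:       ${just_reship:,.2f}/year")
```

With numbers anything like these, the "moral obligation" costs several times what tossing another copy in the mail would.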

Topic maps help integrate the data you “obtain” from other departments. Just enough to make your point. You don’t have to integrate all the data, just enough to win the argument. Until the next argument comes along and you take a slightly bigger bite of the apple.

Agile organizations are run by people agile enough to take control of them.

You can wait for permission from an ossified organization or you can use topic maps to take the first “bite.”

Your move.

PS: If you have investments in journal publishing you might want to check on claims handling.

June 30, 2012

50 Open Source Replacements for Proprietary Business Intelligence Software

Filed under: Business Intelligence,Excel — Patrick Durusau @ 6:49 pm

50 Open Source Replacements for Proprietary Business Intelligence Software by Cynthia Harvey.

From the post:

In a recent Gartner survey, CIOs picked business intelligence and analytics as their top technology priority for 2012. The market research firm predicts that enterprises will spend more than $12 billion on business intelligence (BI), analytics and performance management software this year alone.

As the market for business intelligence solutions continues to grow, the open source community is responding with a growing number of applications designed to help companies store and analyze key business data. In fact, many of the best tools in the field are available under an open source license. And enterprises that need commercial support or other services will find many options available.

This month, we’ve put together a list of 50 of the top open source business intelligence tools that can replace proprietary solutions. It includes complete business intelligence platforms, data warehouses and databases, data mining and reporting tools, ERP suites with built-in BI capabilities and even spreadsheets. If we’ve overlooked any tools that you feel should be on the list, please feel free to note them in the comments section below.

A very useful listing of “replacements” for proprietary software, in part because it includes links to the software to be replaced.

You will find it helpful in identifying software packages with common goals but diverse outputs, grist for topic map mills.

I tried to find a one-page display (print usually works) but you will have to endure the advertising clutter to see the listing.

PS: Remember that MS Excel holds seventy-five percent (75%) of the BI market. Improve upon or use an MS Excel result and you are closer to a commercially viable product. (BI’s Dirty Secrets – Why Business People are Addicted to Spreadsheets)

June 22, 2012

Business Intelligence and Reporting Tools (BIRT)

Filed under: BIRT,Business Intelligence,Reporting — Patrick Durusau @ 3:50 pm

Business Intelligence and Reporting Tools (BIRT)

From the homepage:

BIRT is an open source Eclipse-based reporting system that integrates with your Java/Java EE application to produce compelling reports.

Being reminded by the introduction that reports can consist of lists, charts, crosstabs, letters & documents, and compound reports, I was encouraged to see:

BIRT reports consist of four main parts: data, data transforms, business logic and presentation.

  • Data – Databases, web services, Java objects all can supply data to your BIRT report. BIRT provides JDBC, XML, Web Services, and Flat File support, as well as support for using code to get at other sources of data. BIRT’s use of the Open Data Access (ODA) framework allows anyone to build new UI and runtime support for any kind of tabular data. Further, a single report can include data from any number of data sources. BIRT also supplies a feature that allows disparate data sources to be combined using inner and outer joins.
  • Data Transforms – Reports present data sorted, summarized, filtered and grouped to fit the user’s needs. While databases can do some of this work, BIRT must do it for “simple” data sources such as flat files or Java objects. BIRT allows sophisticated operations such as grouping on sums, percentages of overall totals and more.
  • Business Logic – Real-world data is seldom structured exactly as you’d like for a report. Many reports require business-specific logic to convert raw data into information useful for the user. If the logic is just for the report, you can script it using BIRT’s JavaScript support. If your application already contains the logic, you can call into your existing Java code.
  • Presentation – Once the data is ready, you have a wide range of options for presenting it to the user. Tables, charts, text and more. A single data set can appear in multiple ways, and a single report can present data from multiple data sets.
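
BIRT itself is Java plus XML report designs, but the four-part shape is easy to see in miniature. Below is a hypothetical Python sketch of the same pipeline (data, transform, business logic, presentation); it is not BIRT code, and the rows and target are invented.

```python
# Hypothetical sketch of the four-part report pipeline BIRT describes
# (data, data transforms, business logic, presentation) -- not BIRT code.

# 1. Data: pretend these rows came from JDBC, XML, or a flat file.
rows = [
    {"region": "East", "product": "Widget", "amount": 1200.0},
    {"region": "East", "product": "Gadget", "amount":  300.0},
    {"region": "West", "product": "Widget", "amount":  800.0},
]

# 2. Data transform: group and summarize by region.
totals = {}
for row in rows:
    totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]

# 3. Business logic: flag regions below an (invented) target.
TARGET = 1000.0
flagged = {region: (total, "BELOW TARGET" if total < TARGET else "ok")
           for region, total in totals.items()}

# 4. Presentation: render as plain text (BIRT would render HTML, PDF, etc.).
for region, (total, status) in sorted(flagged.items()):
    print(f"{region:<6} {total:>10.2f}  {status}")
```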

I was clued into BIRT by Actuate, so you might want to pay them a visit as well.

Anytime you are manipulating data, for analysis or reporting, you are working with subjects.

Topic maps are a natural for planning or documenting your transformations or reports.

Or let me put it this way: Do you really want to hunt down what you think you did six months ago for the last report? And then spend a day or two in frantic activity correcting what you misremember? There are other options. Your choice.
