Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

January 31, 2016

Danger of Hackers vs. AI

Filed under: Artificial Intelligence,Cybersecurity,Security — Patrick Durusau @ 9:11 pm

An interactive graphical history of large data breaches by Mark Gibbs.

From the post:

If you’re trying to convince your management to beef up the organization’s security to protect against data breaches, an interactive infographic from Information Is Beautiful might help.

Built with IIB’s forthcoming VIZsweet data visualization tools, the World’s Biggest Data Breaches visualization combines data from DataBreaches.net, IdTheftCentre, and press reports to create a timeline of breaches that involved the loss of 30,000 or more records (click the image below to go to the interactive version). What’s particularly interesting is that while breaches were caused by accidental publishing, configuration errors, inside job, lost or stolen computer, lost or stolen media, or just good old poor security, the majority of events and the largest, were due to hacking.

Make sure the powers that be understand that you don’t have to be a really big organization for a serious data breach to happen.

See Mark’s post for the image and link to the interactive graphic.

Hackers (human intelligence) are kicking cybersecurity’s ass 24 x 7.

The danger of AI (artificial intelligence)? Maybe, someday, it might be a problem, but we don’t know whether it will be, or to what extent.

What priority do you assign these issues in your IT budget?

If you said hackers are #1, congratulations! You have an evidence-based IT budgeting process.

Otherwise, well, see you at DragonCon. I’m sure you will have lots of free time when you aren’t in the unemployment line.

PS: Heavy spending on what is mis-labeled as “artificial intelligence” is perfectly legitimate. Think of it as training computers to do tasks humans can’t do or that machines can do more effectively. Calling it AI loads it with unnecessary baggage.

Google’s Go Victory/AI Danger Summarized In One Sentence

Filed under: Artificial Intelligence,Machine Learning — Patrick Durusau @ 8:53 pm

Google’s Go Victory Is Just A Glimpse Of How Powerful AI Will Be by Cade Metz.

Cade manages to summarize the implications of the Google Go victory and the future danger of AI in one concise sentence:

Bostrom’s book makes the case that AI could be more dangerous than nuclear weapons, not only because humans could misuse it but because we could build AI systems that we are somehow not able to control.

If you don’t have time for the entire article, that sentence summarizes the article as well.

Pay particular attention to the part that reads: “…that we are somehow not able to control.”

Is that like a Terex 33-19 “Titan”

[Image: Terex 33-19 “Titan” haul truck]

with a nuclear power supply and no off switch? (Yes, that is a person in the second wheel from the front.)

We learned only recently that consciousness, at least as we understand the term now, is a product of chaotic and cascading connections. Consciousness May Be the Product of Carefully Balanced Chaos [Show The Red Card].

One supposes that positronic brains (warning: fiction) must share that chaotic characteristic.

However, Cade and Bostrom fail to point to any promising research on the development of positronic brains.

That’s not to deny that poor choices could be made by an AI designed by Aussies. If projected global warming exceeds three degrees Celsius, set off a doomsday bomb. (On the Beach)

The lesson there is two-fold: Don’t build doomsday weapons. Don’t put computers in charge of them.

The danger from AI is in the range of a gamma ray burst ending civilization. If that high.

On the other hand, if you want work that calls for a solid background in science fiction, a knack for media sound bites, and the ability to attract doomsday groupies of all genders, it doesn’t require a lot of research.

The only real requirement is to wring your hands over some imagined scenario that you can’t say will occur, or how, but that will doom us all. Throw in some of the latest buzzwords and you have a presentation/speech/book.

Experts, Sources, Peer Review, Bad Poetry and Flint, Michigan.

Filed under: Peer Review,Skepticism — Patrick Durusau @ 7:54 pm

Red faces at National Archive after Baldrick poem published with WW1 soldiers’ diaries.

From the post:

Officials behind the launch of a major initiative detailing lives of ordinary soldiers during the First World War were embarrassed by the discovery that they had mistakenly included the work of Blackadder character, Baldrick, in the archive release.

The work, entitled ‘The German Guns’ and attributed to Private S.O. Baldrick, was actually written by the sitcom’s writers Richard Curtis and Ben Elton some 70 years after the end of the conflict. Elton was reported to be “delighted at the news” and friends said he was already checking to see if royalty payments may be due.

Although the archive release was scrutinised by experts, it is understood that the Baldrick poem was approved after a clerk recalled hearing Education Secretary Michael Gove referring to Baldrick in relation to the Great War, and assumed that he was of contemporary cultural significance.

Another illustration that experts and peer review aren’t the gold standards of correctness.

Or to put it differently: Mistakes happen, especially without sources.

If the only surviving information was Education Secretary Michael Gove referring to Baldrick, not only would the mistake be perpetuated but it would be immune to correction.

Citing and/or pointing to a digital resource that was the origin of the poem would be more likely to trip warnings (by date of publication) or to contain a currently recognizable reference, such as Blackadder.

The same lesson should be applied to reports such as Michael Moore’s claim:

1. While the Children in Flint Were Given Poisoned Water to Drink, General Motors Was Given a Special Hookup to the Clean Water. A few months after Gov. Snyder removed Flint from the clean fresh water we had been drinking for decades, the brass from General Motors went to him and complained that the Flint River water was causing their car parts to corrode when being washed on the assembly line. The governor was appalled to hear that GM property was being damaged, so he jumped through a number of hoops and quietly spent $440,000 to hook GM back up to the Lake Huron water, while keeping the rest of Flint on the Flint River water. Which means that while the children in Flint were drinking lead-filled water, there was one—and only one—address in Flint that got clean water: the GM factory.

Verification is especially important for me because I think Michael Moore is right, and that predisposes me to accept his statements without evidence.

In no particular order:

  • What “brass” from GM? Names, addresses, contact details. Links to statements?
  • What evidence did the “brass” present? Documents? Minutes of the meeting? Date?
  • What hoops did the Governor jump through? Who else in state government was aware of the request?
  • Where is the disbursement order for the $440,000 and related work orders?
  • Who was aware of any or all of these steps, in and out of government?

Those are some of the questions to ask to verify Michael Moore’s claim and, just as importantly, to lay a trail of knowledge and responsibility for the damage to the citizens of Flint.

Just because it was your job to hook GM back up to clean water, knowing that the citizens of Flint would be drinking water that corrodes auto parts, doesn’t make it right.

There are obligations that transcend personal interests or those of government.

Not poisoning innocents is one of those.

If there were sources for Michael’s account, people could start to be brought to justice. (See, sources really are important.)

January 30, 2016

9 “Laws” for Data Mining [Be Careful With #5]

Filed under: Data Mining,Data Science — Patrick Durusau @ 10:13 pm

9 “Laws” for Data Mining

A Forbes piece on “laws” for data mining that are equally applicable to data science.

Being Forbes, it holds that technology is valuable because it has value for business, not because “everyone is doing it,” “it’s really cool technology,” “it’s a graph,” or “it will bring all humanity to a new plane of existence.”

To be honest, Forbes is a welcome relief some days.

But even Forbes stumbles, as with law #5:

5. There are always patterns: In practice, your data always holds useful information to support decision-making and action.

What? “…your data always holds useful information to support decision-making and action.”

That’s as nutty as the “new plane of existence” stuff.

When I say “nutty,” I mean that in a professional sense. The term apophenia was coined to label the tendency to see meaningful patterns in random data. (Yes, that includes your data.)

The original work described the “…onset of delusional thinking in psychosis.”

No doubt you will find patterns in your data, but that those patterns “…hold useful information to support decision-making and action” isn’t a given.

That is an echo of the near fanatic belief that if businesses used big data, they would be more profitable.

Most of the other “laws” are more plausible than #5, but even there, don’t abandon your judgement even if Forbes says that something is so.
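If you want to see law #5 fail in miniature, generate pure noise and go pattern hunting. Here is a quick sketch (TypeScript, invented for illustration): among enough pairs of random series, some pair will always look strongly correlated.

```typescript
// Toy demonstration of apophenia in data mining: among many pairs of
// pure-noise series, some pair will always look "correlated".
function randSeries(n: number): number[] {
  return Array.from({ length: n }, () => Math.random());
}

// Pearson correlation coefficient of two equal-length series.
function corr(a: number[], b: number[]): number {
  const n = a.length;
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / n;
  const ma = mean(a);
  const mb = mean(b);
  let num = 0, da = 0, db = 0;
  for (let i = 0; i < n; i++) {
    num += (a[i] - ma) * (b[i] - mb);
    da += (a[i] - ma) ** 2;
    db += (b[i] - mb) ** 2;
  }
  return num / Math.sqrt(da * db);
}

// 50 random series of 20 points each: report the strongest pairwise correlation.
const series = Array.from({ length: 50 }, () => randSeries(20));
let best = 0;
for (let i = 0; i < series.length; i++) {
  for (let j = i + 1; j < series.length; j++) {
    best = Math.max(best, Math.abs(corr(series[i], series[j])));
  }
}
console.log(best.toFixed(2)); // typically well above 0.6 -- a "pattern" with no meaning
```

The “pattern” is guaranteed. The useful information is not.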

I first saw this in a tweet by Data Science Renee.

Tip #20: Play with Racket [Computer Science for Everyone?]

Filed under: Computer Science,Education,Programming — Patrick Durusau @ 2:23 pm

Tip #20: Play with Racket by Aaron Quint and Michael R. Bernstein.

From the post:

Racket is a programming language in the Lisp tradition that is different from other programming languages in a few important ways. It can be any language you want – because Racket is heavily used for pedagogy, it has evolved into a suite of languages and tools that you can use to explore as many different programming paradigms as you can think of. You can also download it and play with it right now, without installing anything else, or knowing anything at all about computers or programming. Watching Matthias Felleisen’s “big-bang: the world, universe, and network in the programming language” talk will give you an idea of how Racket can be used to help people learn how to think about mathematics, computation, and more. Try it out even if you “hate Lisp” or “don’t know how to program” – it’s really a lot of fun.

Aaron and Michael scooped President Obama’s computer-science-skills-for-everyone push by a day:

President Barack Obama said Saturday he will ask Congress for billions of dollars to help students learn computer science skills and prepare for jobs in a changing economy.

“In the new economy, computer science isn’t an optional skill. It’s a basic skill, right along with the three R’s,” Obama said in his weekly radio and Internet address….(Obama Wants $4B to Help Students Learn Computer Science)

“Computer science for everyone” is a popular chant, but consider the Insecure Internet of Things (IIoT).

Will minimal computer science skills increase or decrease the level of security for the IIoT?

That’s what I think too.

Removal of IoT components is the only real defense. Expect a vibrant cottage industry to grow up around doing exactly that.

January 29, 2016

What is this Hadoop thing? [What’s Missing From This Picture?]

Filed under: BigData,Hadoop — Patrick Durusau @ 5:02 pm

Kirk Borne posted this image to Twitter today:

hadoop-thing

You have seen this image or similar ones. About Hadoop, Big Data, non-IT stuff, etc. You can probably recite the story with your eyes closed, even when you are drunk or stoned. 😉

But today I realized not only who is in the image but who’s missing. Especially in a Hadoop/Big Data context.

Who’s in the image? Customers. They are the blind actors who would not recognize Hadoop in a closet with the light on. They have no idea what relevance Hadoop has to their data and/or any possible benefit to their business problems.

Who’s not in the image? Marketers. They are just out of view in this image. Once they learn a customer has data, they have the solution, Hadoop. “What do you want Hadoop to do exactly?” marketers ask before directing a customer to a particular part of the Hadoop/elephant.

Lo and behold, data salvation is at hand! May the IT gods be praised! We are going to have big data with Hadoop, err, ah, pushing, dragging, well, we’ll get to the specifics later.

The crux of every business problem is a business need, not a technological one.

You may not be able to store full bandwidth teleconference videos but if you don’t do any video teleconferencing, that’s not really your problem.

If you are already running three shifts making as many HellFire missiles as you can, there isn’t much point in building a recommendation system for your sales department to survey your customers.

Go into every IT conversation with a list of your business needs and require that proposed solutions address those needs, in defined and measurable ways.

You can avoid feeling up an elephant while someone loots your wallet.

Twitter Graph Analytics From NodeXL (With Questions)

Filed under: Graphics,Graphs,NodeXL,Twitter — Patrick Durusau @ 4:35 pm

I’m sure you have seen this rather impressive Twitter graphic:

[Image: NodeXL graph of big data tweets]

And you can see a larger version, with a link to the interactive version here: https://nodexlgraphgallery.org/Pages/Graph.aspx?graphID=61591

Impressive visualization, but… tell me, what can you learn from these tweets about big data?

I mean, visualization is a great tool but if I am not better informed after using the visualization than before, what’s the point?

If you go to the interactive version, you will find lists derived from the data, such as the “top 10 vertices, ranked by Betweenness Centrality,” top 10 URLs in the graph and groups in the graph, top domains in the graph and groups in the graph, etc.

None of which is evident from casual inspection of the graph. (Top influencers might be, if I could get the interactive version to resize, but even then picking them out would be difficult unless the gap between #10 and #11 was fairly large.)

Nothing wrong with eye candy, but before touting the usefulness of visualization, let’s look for more intuitive visualizations.

I saw this particular version in a tweet by Kirk D. Borne.

Alert the ASPCA: Roo Bombs

Filed under: Humor — Patrick Durusau @ 11:36 am

I was surprised to see major news outlets with variations on this headline: Teen Accused of ISIS Plot to Bomb Cops Using Kangaroo: Reports.

In On the Beach, Aussies didn’t seem this imaginative.

Spoiler space:

“We may die so let’s kill ourselves.”

It’s a big jump in imagination skills from suicide in the face of certain death to using, or fearing, a kangaroo as a means of terrorist activity.

Either the Aussies were inaccurately portrayed in On the Beach or their development of imagination skills has been off the charts in the past sixty or so years.

From the post:

“The conversation continues with Besim detailing what he did that day and they have a general discussion around animals and wildlife in Australia including a suggestion that a kangaroo could be packed with C4 explosive, painted with the [ISIS] symbol and set loose on police officers,” the prosecution summary said, according to the ABC.

I can’t read that summary without either laughing out loud or at least smiling.

Even a clueless Westerner such as myself can imagine the difficulty of painting anything on a tame animal, much less a kangaroo. Roos, so far as I know, don’t have political affiliations and would object to being painted on general principles.

Moreover, how do you “set [a kangaroo] loose on police officers”?

Training one would be a real challenge for an ADD-afflicted generation. Tossing a rock is the typical level of planning and execution one can expect.

Perhaps stabbing/shooting but then you have to remember to bring a weapon. Happens but not common.

It isn’t hard to imagine some stoner saying they want to develop an attack Roo and then laughing their asses off, but for the authorities to take it seriously demonstrates a decided lack of humor.

What if Besim had proposed teaching thousands of budgies common children’s names and coating them with LSD, to be released as part of Anzac Day celebrations? Free budgies and a little something extra for the holiday.

Oops! Are the Australian police going to come knocking on my door?

The only effective weapon against government bed-wetters selling fear of terrorism is mockery.

Conjure up competing absurdities for every terrorist arrest reported with an absurdity of its own!

Sure, there will be office disputes at holiday parties but that’s hardly terrorism, unless you want to mis-label it so. We have funeral shootings in Atlanta. Hardly terrorism.

Not to mention a recent snow event killed more US citizens than ISIS has. That wasn’t labeled terrorism. Although, I suppose the Islamic State could claim, along with Pat Robertson, control of the weather and take credit for it.

BTW, if anyone gives you a free budgie, be sure to wash it and your hands carefully. 😉

PS: So you will know a potential Roo bomb when you see one (minus the ISIS symbol):

[Image: red kangaroo]

January 28, 2016

Twitter Account Details

Filed under: Twitter — Patrick Durusau @ 8:00 pm

Kirk D. Borne tweeted this page which returns all of his Twitter details.

You can try mine as well, patrickDurusau.

The generic link: http://www.twitteraccountsdetails.com/

Enjoy!

BTW, if you aren’t already following Kirk D. Borne, you should be.

Math whizzes of ancient Babylon figured out forerunner of calculus

Filed under: Corporate Memory,History,Language,Memory — Patrick Durusau @ 5:53 pm

The video is very cool and goes along with:

Math whizzes of ancient Babylon figured out forerunner of calculus by Ron Cowen.

[Image: Babylonian clay tablet, from the article]

What could have happened if a forerunner to calculus hadn’t been forgotten for 1400 years?

A sharper question would be:

What if you didn’t lose corporate memory with every promotion, retirement or person leaving the company?

We have all seen it happen and all of us have suffered from it.

What if the investment in expertise and knowledge wasn’t flushed away with promotion, retirement, departure?

That would have to be one helluva ontology to capture everyone’s expertise and knowledge.

What if it wasn’t a single, unified or even “logical” ontology? What if it only represented the knowledge that was important to capture for you and yours? Not every potential user for all time.

Just as we don’t all wear the same uniforms to work every day, we should not waste time looking for a universal business language for corporate memory.

Unless you are in the business of filling seats for such quixotic quests.

I prefer to deliver a measurable ROI, if it’s all the same to you.

Are you ready to stop hemorrhaging corporate knowledge?

New Nvidia Resources – Data Science Bowl [Topology and Aligning Heart Images?]

Filed under: GPU,Medical Informatics,NVIDIA — Patrick Durusau @ 5:22 pm

New Resources Available to Help Participants by Pauline Essalou.

From the post:

Hungry for more help? NVIDIA can feed your passion and fuel your progress.

The free course includes lecture recordings and hands-on exercises. You’ll learn how to design, train, and integrate neural network-powered artificial intelligence into your applications using widely-used open source frameworks and NVIDIA software.

Visit NVIDIA at: https://developer.nvidia.com/deep-learning-courses

For access to the hands-on labs for free, you’ll need to register, using the promo code KAGGLE, at: https://developer.nvidia.com/qwiklabs-signup

With weeks to go until the March 7 stage one deadline and stage two data release deadline, there’s still plenty of time for participants to take advantage of these tools and continue to submit solutions. Visit the Data Science Bowl Resources page for a complete listing of free resources.

If you aren’t already competing, the challenge in brief:

Declining cardiac function is a key indicator of heart disease. Doctors determine cardiac function by measuring end-systolic and end-diastolic volumes (i.e., the size of one chamber of the heart at the beginning and middle of each heartbeat), which are then used to derive the ejection fraction (EF). EF is the percentage of blood ejected from the left ventricle with each heartbeat. Both the volumes and the ejection fraction are predictive of heart disease. While a number of technologies can measure volumes or EF, Magnetic Resonance Imaging (MRI) is considered the gold standard test to accurately assess the heart’s squeezing ability.

The challenge with using MRI to measure cardiac volumes and derive ejection fraction, however, is that the process is manual and slow. A skilled cardiologist must analyze MRI scans to determine EF. The process can take up to 20 minutes to complete—time the cardiologist could be spending with his or her patients. Making this measurement process more efficient will enhance doctors’ ability to diagnose heart conditions early, and carries broad implications for advancing the science of heart disease treatment.

The 2015 Data Science Bowl challenges you to create an algorithm to automatically measure end-systolic and end-diastolic volumes in cardiac MRIs. You will examine MRI images from more than 1,000 patients. This data set was compiled by the National Institutes of Health and Children’s National Medical Center and is an order of magnitude larger than any cardiac MRI data set released previously. With it comes the opportunity for the data science community to take action to transform how we diagnose heart disease.

This is not an easy task, but together we can push the limits of what’s possible. We can give people the opportunity to spend more time with the ones they love, for longer than ever before. (From: https://www.kaggle.com/c/second-annual-data-science-bowl)
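For reference, the derivation the challenge mentions is the standard definition of ejection fraction (not specific to the contest):

EF = (EDV − ESV) / EDV × 100%

so a left ventricle holding 120 ml at end-diastole and 50 ml at end-systole has an EF of about 58%. The hard part, and the point of the contest, is automating the two volume measurements; the arithmetic afterward is trivial.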

Unlike the servant with the one talent, Nvidia isn’t burying its talent under a basket. It is spreading access to its information as far as possible, in contrast to editorial writers at the New England Journal of Medicine.

Care to guess who is going to have the greater impact on cardiology and medicine?

I forgot to mention that Nietzsche described the editorial page writers of the New England Journal of Medicine quite well when he said, “…they tell the proper time and make a modest noise when doing so….” (Of Scholars).

I first saw this in a tweet by Kirk D. Borne.

PS: Kirk pointed to Image Preprocessing: The Challenges and Approach by Peter VanMaasdam today.

Are you surprised that the data is dirty? 😉

I’m not a professional mathematician, but what if you created a common topology for hearts and then treated the different measurements for each one as dimensions?

I say that having recently read: Quantum algorithms for topological and geometric analysis of data by Seth Lloyd, Silvano Garnerone & Paolo Zanardi. Nature Communications 7, Article number: 10138 doi:10.1038/ncomms10138, Published 25 January 2016.

Whether you have a quantum computer or not, given the small size of the heart data set, some of those methods might be applicable.

Unless my memory fails me, the entire GPU Gems series is online at Nvidia and has several chapters on topological methods.

Good luck!

Federal Cybersecurity Egg Roll Event Continues (DHS)

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:26 pm

If you remember my posts, “Cybersecurity Sprint or Multi-Year Egg Roll?” from last June (2015), and Fed Security Sprint – Ans: Multi-Year Egg Roll (Nov. 2015), there is further confirmation of the projected duration of the egg roll from the GAO.

The GAO report is titled DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System.

The executive summary prepares the reader for 61 pages of grim reading:

The Department of Homeland Security’s (DHS) National Cybersecurity Protection System (NCPS) is partially, but not fully, meeting its stated system objectives:

  • Intrusion detection: NCPS provides DHS with a limited ability to detect potentially malicious activity entering and exiting computer networks at federal agencies. Specifically, NCPS compares network traffic to known patterns of malicious data, or “signatures,” but does not detect deviations from predefined baselines of normal network behavior. In addition, NCPS does not monitor several types of network traffic and its “signatures” do not address threats that exploit many common security vulnerabilities and thus may be less effective.
  • Intrusion prevention: The capability of NCPS to prevent intrusions (e.g., blocking an e-mail determined to be malicious) is limited to the types of network traffic that it monitors. For example, the intrusion prevention function monitors and blocks e-mail. However, it does not address malicious content within web traffic, although DHS plans to deliver this capability in 2016.
  • Analytics: NCPS supports a variety of data analytical tools, including a centralized platform for aggregating data and a capability for analyzing the characteristics of malicious code. In addition, DHS has further enhancements to this capability planned through 2018.
  • Information sharing: DHS has yet to develop most of the planned functionality for NCPS’s information-sharing capability, and requirements were only recently approved. Moreover, agencies and DHS did not always agree about whether notifications of potentially malicious activity had been sent or received, and agencies had mixed views about the usefulness of these notifications. Further, DHS did not always solicit—and agencies did not always provide—feedback on them.

In addition, while DHS has developed metrics for measuring the performance of NCPS, they do not gauge the quality, accuracy, or effectiveness of the system’s intrusion detection and prevention capabilities. As a result, DHS is unable to describe the value provided by NCPS.

Regarding future stages of the system, DHS has identified needs for selected capabilities. However, it had not defined requirements for two capabilities: to detect (1) malware on customer agency internal networks or (2) threats entering and exiting cloud service providers. DHS also has not considered specific vulnerability information for agency information systems in making risk-based decisions about future intrusion prevention capabilities.

Federal agencies have adopted NCPS to varying degrees. The 23 agencies required to implement the intrusion detection capabilities had routed some traffic to NCPS intrusion detection sensors. However, only 5 of the 23 agencies were receiving intrusion prevention services, but DHS was working to overcome policy and implementation challenges. Further, agencies have not taken all the technical steps needed to implement the system, such as ensuring that all network traffic is being routed through NCPS sensors. This occurred in part because DHS has not provided network routing guidance to agencies. As a result, DHS has limited assurance regarding the effectiveness of the system.
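To make the intrusion detection bullet concrete, here is a toy sketch (TypeScript, with entirely hypothetical signatures and numbers) of the difference between the signature matching NCPS does and the baseline anomaly detection it doesn’t:

```typescript
// Signature matching: flag traffic that contains a known-bad pattern.
const signatures = ["SELECT * FROM users WHERE", "<script>alert("];

function matchesSignature(payload: string): boolean {
  return signatures.some((sig) => payload.includes(sig));
}

// Anomaly detection: flag traffic volumes far from a learned baseline,
// catching novel attacks that no signature describes.
function isAnomalous(bytesPerMinute: number, baselineMean: number, baselineStd: number): boolean {
  return Math.abs(bytesPerMinute - baselineMean) > 3 * baselineStd;
}

console.log(matchesSignature("GET /?q=<script>alert(1)</script>")); // true: known pattern
console.log(isAnomalous(5000000, 100000, 20000)); // true: far above normal volume
```

A signature-only system is blind to anything not already on its list, which is exactly the GAO’s complaint.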

The brightest part of the report is that DHS “concurred with GAO’s recommendations.”

That’s a far cry from the state of total denial at the Office of Personnel Management last year. DHS is acknowledging its problems. Whether that translates into fixing those problems remains to be seen.

(Do you know the fate of the management incompetents at OPM? Just curious who is being inflicted with their incompetence now.)

I truly hate to say anything nice about the DHS but one must give the devil his due.

Unfortunately for the DHS, elected leaders don’t understand that need, desire, and importance are all non-factors in technical success. You may not like Mendelian genetics, but as Stalin discovered, you pursue other models at your own risk.

The same is true for cybersecurity.

Typesetting Mathematics According to the ISO Standard [Access to ISO 80000-2:2009]

Filed under: Mathematics,Typography — Patrick Durusau @ 3:30 pm

Typesetting Mathematics According to the ISO Standard by Nick Higham.

From the post:

In The Princeton Companion to Applied Mathematics we used the conventions that the constants e (the base of the natural logarithm) and i (the imaginary unit), and the d in derivatives and integrals, are typeset in an upright font. These conventions are part of an ISO standard, ISO 80000-2:2009. The standard is little-known, though there is an excellent article about it in TUGboat by Claudio Beccari, and Kopka and Daly’s A Guide to \LaTeX has a page on the standard (in section 7.4.10 of the fourth edition and section 5.4.10 of the third edition).
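For LaTeX users, the convention boils down to switching those three letters to an upright font. A minimal illustration (my own macros, not taken from the standard or from Beccari’s article):

```latex
% ISO 80000-2 style: upright e, i and d
\newcommand{\me}{\mathrm{e}}   % base of the natural logarithm
\newcommand{\mi}{\mathrm{i}}   % imaginary unit
\newcommand{\dd}{\mathrm{d}}   % differential operator

% Usage: $\me^{\mi\pi} = -1$ and $\int_0^1 f(x) \,\dd x$
```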

Nick mentions that you can get a copy of ISO 80000-2:2009 for about $150, and he also says:

However, it is easy to find a freely downloadable version via a Google search.

Let’s not be coy about this sort of thing: try http://www.ise.ncsu.edu/jwilson/files/mathsigns.pdf

Every time an illegitimate privilege is acknowledged, it grows stronger.

I refuse to confer any legitimacy or recognition of legitimacy to restricted access to ISO 80000-2:2009.

And you?

Looking For Hidden Tear (ransomware) Source?

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:18 am

Disappointed to read David Bisson advocating security through secrecy in Ransomware author tries to blackmail security researcher into taking down ‘educational’ malware project.

From the post:

The author of the Magic ransomware unsuccessfully attempted to blackmail a security researcher into taking down two open-source ‘educational’ malware projects on GitHub.

Magic, a malicious program which is written in C# and which demands 1 Bitcoin from its victims, is the second strain of ransomware discovered in January to have been built on malware that has been made available to the public for ‘educational’ purposes.

The first threat, Ransom_Cryptear.B, is based on an open-source project called Hidden Tear, which is currently hosted by Turkish security researcher Utku Sen on his GitHub page.

Whether Sen is able to recover the victims’ files without working with the ransomware author remains to be seen. However, what is abundantly clear is Sen’s foolishness in releasing ransomware code as open-source. Though such a move might have educational motives at heart, this will not stop malicious and inexperienced attackers from co-opting the ransomware code for their own purposes.

Going forward, researchers should never make ransomware code available beyond the labs where they study it. Ordinary users will surely benefit in the long run.

See David’s post for the details. My concern is his advocacy of non-publication of ransomware code.

McAfee Labs 2016 Threats Predictions report makes it clear that “malicious and inexperienced attackers” are not the source of great concern for ransomware.

…..
In 2015 we saw ransomware-as-a-service hosted on the Tor network and using virtual currencies for payments. We expect to see more of this in 2016, as inexperienced cybercriminals will gain access to this service while staying relatively anonymous.

Although a few families—including CryptoWall 3, CTB-Locker, and CryptoLocker—dominate the current ransomware landscape, we predict that new variants of these families and new families will surface with new stealth functionalities. For example, new variants may start to silently encrypt data. These encrypted files will be backed up and eventually the attacker will pull the key, resulting in encrypted files both on the system and in the backup. Other new variants might use kernel components to hook the file system and encrypt files on the fly, as the user accesses them.
….. (at page 24)

Amateurs aren’t building “ransomware-as-a-service” sites and there’s no reason to pretend otherwise.

Moreover, the “good old boy network” of security researchers hasn’t protected anyone from ransomware if the McAfee Labs and similar reports are to be credited. If concealment of security flaws and malware were effective, there should be some evidence to that effect. Yes?

In the absence of evidence, dare I say “data?,” we should dismiss concealment as a strategy for cybersecurity as utter speculation. Speculation that favors a particular class of researchers. (Can you guess their gender statistics?)

In case you are interested, the Github page for Hidden Tear now reads in part:

This project is abandoned. If you are a researcher and want the code, contact me with your university or company e-mail http://utkusen.com/en/contact.html

Well, no harm done. If you are looking for the master.zip file for Hidden Tear, check the Internet Archive: Wayback Machine, or more directly, the backup of the Hidden Tear project on 26 January 2016.

You can presume that copies have been made of the page and master.zip file, just in case something unfortunate happens to the copies at the Internet Archive: Wayback Machine.

Better software, user education, legal actions against criminals are all legitimate and effective means of combating the known problem of ransomware.

Concealing ransomware code is a form of privilege. As we all know, privilege has an unhappy record in computer programming and society in general. Don’t support it, here or anywhere.

Consciousness May Be the Product of Carefully Balanced Chaos [Show The Red Card]

Filed under: Artificial Intelligence,EU,Human Cognition,Machine Learning — Patrick Durusau @ 8:31 am

Consciousness May Be the Product of Carefully Balanced Chaos by sciencehabit.

From the posting:

The question of whether the human consciousness is subjective or objective is largely philosophical. But the line between consciousness and unconsciousness is a bit easier to measure. In a new study (abstract) of how anesthetic drugs affect the brain, researchers suggest that our experience of reality is the product of a delicate balance of connectivity between neurons—too much or too little and consciousness slips away. During wakeful consciousness, participants’ brains generated “a flurry of ever-changing activity”, and the fMRI showed a multitude of overlapping networks activating as the brain integrated its surroundings and generated a moment to moment “flow of consciousness.” After the propofol kicked in, brain networks had reduced connectivity and much less variability over time. The brain seemed to be stuck in a rut—using the same pathways over and over again.

These researchers need to be shown the red card as they say in soccer.

I thought it was agreed that during the Human Brain Project, no one would research or publish new information about the human brain, in order to allow the EU project to complete its “working model” of the human brain.

The Human Brain Project is a butts-in-seats (and/or hotels) project and a gumball machine will be able to duplicate its results. But discovering vast amounts of unknown facts demonstrates the lack of an adequate foundation for the project at its inception.

In other words, more facts may decrease public support for ill-considered WPA projects for science.

Calling the “judgement” of award managers into question (favoritism would be a more descriptive term) surely merits the “red card” in this instance.

(Note to readers: This post is to be read as sarcasm. The excellent research reported by Enzo Tagliazucchi, et al. in Large-scale signatures of unconsciousness are consistent with a departure from critical dynamics is an indication of some of the distance between current research and replication of a human brain.)

The full abstract if you are interested:

Loss of cortical integration and changes in the dynamics of electrophysiological brain signals characterize the transition from wakefulness towards unconsciousness. In this study, we arrive at a basic model explaining these observations based on the theory of phase transitions in complex systems. We studied the link between spatial and temporal correlations of large-scale brain activity recorded with functional magnetic resonance imaging during wakefulness, propofol-induced sedation and loss of consciousness and during the subsequent recovery. We observed that during unconsciousness activity in frontothalamic regions exhibited a reduction of long-range temporal correlations and a departure of functional connectivity from anatomical constraints. A model of a system exhibiting a phase transition reproduced our findings, as well as the diminished sensitivity of the cortex to external perturbations during unconsciousness. This framework unifies different observations about brain activity during unconsciousness and predicts that the principles we identified are universal and independent from its causes.

The “official” version of this article lies behind a paywall but you can see it at: http://arxiv.org/pdf/1509.04304.pdf for free.

Kudos to the authors for making their work accessible to everyone!

I first saw this in a Facebook post by Simon St. Laurent.

Large-scale Conspiracies Fail On Revelation? – A Contrary Example

Filed under: Peer Review,Security — Patrick Durusau @ 8:00 am

Large-scale conspiracies would quickly reveal themselves, equations show

From the post:

While we can all keep a secret, a study by Dr David Robert Grimes suggests that large groups of people sharing in a conspiracy will very quickly give themselves away. The study is published online by journal PLOS ONE.

Dr Grimes, a physicist working in cancer research, is also a science writer and broadcaster. His profile means that he receives many communications from people who believe in science-related conspiracies. Those messages prompted him to look at whether large-scale collusions were actually tenable.

He explained: ‘A number of conspiracy theories revolve around science. While believing the moon landings were faked may not be harmful, believing misinformation about vaccines can be fatal. However, not every belief in a conspiracy is necessarily wrong — for example, the Snowden revelations confirmed some theories about the activities of the US National Security Agency.

He then looked at the maximum number of people who could take part in an intrigue in order to maintain it. For a plot to last five years, the maximum was 2521 people. To keep a scheme operating undetected for more than a decade, fewer than 1000 people can be involved. A century-long deception should ideally include fewer than 125 collaborators. Even a straightforward cover-up of a single event, requiring no more complex machinations than everyone keeping their mouth shut, is likely to be blown if more than 650 people are accomplices.
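For intuition about where those caps come from: in the simplest, constant-population version of Grimes’s model, leaks arrive as a Poisson process, so if each conspirator has a small annual probability φ of exposing the plot, a conspiracy of N people fails within t years with probability

L(t) = 1 − e^(−φNt)

The paper estimates φ on the order of 4 × 10⁻⁶ per person per year in the best case; holding L(t) below 5% then forces N down as t grows, which is roughly where figures like 2521 people for five years come from. (That is a sketch of the constant-N case only; the paper also models conspirator populations that shrink over time.)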

Dr. Grimes equates revelation with “failure” of a conspiracy.

But what of conspiracies that are “revealed” that don’t fail? Conspiracies sustained in spite of revelation of the true state of affairs.

Peer review has been discredited too often to require citation. But, for the sake of tradition: NIH grants could be assigned by lottery as effectively as the present grant process, …lotteries to pick NIH research-grant recipients, editors and peer reviewers fail to catch basic errors, Science self-corrects – instantly, and replication is a hit or miss affair, Replication in Psychology?.

There are literally thousands of examples of peer review as preached not being realized in practice. Yet every journal in the humanities and sciences and conferences for both, continue to practice and swear by peer review, in the face of known evidence to the contrary.

Dr. Grimes fails to account for the maintenance of the peer review conspiracy, one of the most recent outrages being that falsification of research results is not misconduct, Pressure on controversial nanoparticle paper builds.

How is it that both the conspiracy and the contrary facts are revealed over and over again, yet the conspiracy attracts new adherents every year?

BTW, the conspiracy against citizens of the United States and the world continues, despite the revelations of Edward Snowden.

Perhaps revelation isn’t “failure” for a conspiracy but simply another stage in its life-cycle?

You can see this work in full at: David Robert Grimes. On the Viability of Conspiratorial Beliefs. PLOS ONE, 2016; 11 (1): e0147905 DOI: 10.1371/journal.pone.0147905.

January 27, 2016

OUCH!

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:51 pm

OUCH! Security Awareness Newsletter

From the post:

“Wow! This is the first security awareness document that our users really like! Thank you, SANS”

That note came from the CISO of an 8,000 employee organization. OUCH! is the world’s leading, free security awareness newsletter designed for the common computer user. Published every month and in multiple languages, each edition is carefully researched and developed by the SANS Securing The Human team, SANS instructor subject matter experts and team members of the community. Each issue focuses on and explains a specific topic and actionable steps people can take to protect themselves, their family and their organization. OUCH! is distributed under the Creative Commons BY-NC-ND 4.0 license. You are encouraged to distribute OUCH! within your organization or share with family and friends, the only limitation is you cannot modify nor sell OUCH!.

The OUCH! newsletter and all of its translations are done by community volunteers. As such, some languages may not be available upon initial publication date, but will be added as soon as they are. Be sure to review our other free resources for security awareness programs such as presentations, posters and planning materials on our Resources Page.

You probably won’t benefit from this but may know users who will. Fairly commonplace advice.

You can subscribe to the newsletter but must have a SANS account.

Be aware your users may compare your password requirements with those at SANS:

Passwords must be at least 10 characters long and contain 5 unique characters

Passwords must also include at least one of each of the following: number, uppercase letter, lowercase letter, and special character ( ! £ $ % ^ & * ( ) @ # ? < > . )
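Out of curiosity, those rules reduce to a few lines of code. A minimal sketch (TypeScript; the special-character set is copied from the list above, and this is my approximation, not SANS’s actual check):

```typescript
// Approximation of the SANS password rules quoted above.
function meetsSansRules(password: string): boolean {
  const longEnough = password.length >= 10;
  const enoughUnique = new Set(password).size >= 5;
  const hasDigit = /[0-9]/.test(password);
  const hasUpper = /[A-Z]/.test(password);
  const hasLower = /[a-z]/.test(password);
  const hasSpecial = /[!£$%^&*()@#?<>.]/.test(password);
  return longEnough && enoughUnique && hasDigit && hasUpper && hasLower && hasSpecial;
}

console.log(meetsSansRules("Passw0rd!x"));  // true: 10 chars, all classes present
console.log(meetsSansRules("aaaaaaaA1!"));  // false: fewer than 5 unique characters
```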

I suppose that helps, but do you remember Schneier’s first book on cryptography? In the introduction he says there are two kinds of cryptography: the kind that keeps your kid sister from reading your diary, and the kind that stumps major governments. His book is about the latter. Words to that effect.

The SANS passwords will keep your kid sister out, at least to middle school, maybe.

PS: Best name for a newsletter I have seen in a long time. Suggestions for a topic map newsletter name?

Another Victory For Peer Review – NOT! Cowardly Science

Filed under: Chemistry,Peer Review,Science — Patrick Durusau @ 9:35 pm

Pressure on controversial nanoparticle paper builds by Anthony King.

From the post:

The journal Science has posted an expression of concern over a controversial 2004 paper on the synthesis of palladium nanoparticles, highlighting serious problems with the work. This follows an investigation by the US funding body the National Science Foundation (NSF), which decided that the authors had falsified research data in the paper, which reported that crystalline palladium nanoparticle growth could be mediated by RNA.1 The NSF’s 2013 report on the issue, and a letter of reprimand from May last year, were recently brought into the open by a newspaper article.

The chief operating officer of the NSF identified ‘an absence of care, if not sloppiness, and most certainly a departure from accepted practices’. Recommended actions included sending letters of reprimand, requiring the subjects contact the journal to make a correction and barring the two chemists from serving as a peer reviewer, adviser or consultant for the NSF for three years.

Science notes that, though the ‘NSF did not find that the authors’ actions constituted misconduct, it nonetheless concluded that there “were significant departures from research practice”.’ The NSF report noted it would no longer fund the paper’s senior authors chemists Daniel Feldheim and Bruce Eaton at the University of Colorado, Boulder, who ‘recklessly falsified research data’, unless they ‘take specific actions to address issues’ in the 2004 paper. Science said it is working with the two authors ‘to understand their response to the NSF final ruling’.

Feldheim and Eaton have been under scrutiny since 2008, when an investigation by their former employer North Carolina State University, US, concluded the 2004 paper contained falsified data. According to Retraction Watch, Science said it would retract the paper as soon as possible.

I’m not a subscriber to Science, unfortunately, but if you are, can you write to Marcia McNutt, Editor-in-Chief, to ask why a finding of “recklessly falsified research data” merits only an expression of concern?

What’s with that? Concern?

In many parts of the United States, you can be murdered with impunity for DWB, Driving While Black, but you can falsify research data and only merit an expression of “concern” from Science?

Not to mention that the NSF doesn’t think that falsifying research evidence is “misconduct.”

The NSF needs to document what it thinks “misconduct” means. I don’t think it means what they think it means.

Every profession has bad apples but what is amazing in this case is the public kid glove handling of known falsifiers of evidence.

What is required for a swift and effective response against scientific misconduct?

Vivisection of human babies?

Or would that only count if they failed to have a petty cash account and to reconcile it on a monthly basis?

Knowing where to look: Sources of imagery for geolocation

Filed under: Journalism,News,Reporting,Verification — Patrick Durusau @ 9:07 pm

Knowing where to look: Sources of imagery for geolocation by Eliot Higgins.

From the post:

With geolocation playing a core role in the verification of images, one key part of the process is finding reference information to help confirm the location of the image in question.

As recently covered on First Draft, satellite imagery from Google Earth and other providers can play an essential role in the geolocation of images. But they are not the only sources of information for corroborating material that can help you figure out where a picture or video was taken.

The resources Eliot covers need to be on your internal verification homepage. One-click away from your immediate verification need.

I really like his phrase, “knowing where to look.”

That resonates with so many topic map themes.

January 26, 2016

New ways to stay informed about presidential politics (Google + Fox?)

Filed under: News,Politics,Reporting — Patrick Durusau @ 4:54 pm

New ways to stay informed about presidential politics.

From the post:

In just two days, Americans will tune in for the final Republican debate before the 2016 primary season officially kicks off in Iowa, and we’re teaming up with Fox News Channel to make sure every citizen can get the most out of it. To help people get informed before heading to the polls, we’re integrating three new components into the debate: a way to hear directly from candidates on Google; real-time Google Trends data; and questions from some of YouTube’s most prominent voices.

At first I thought this was a sick joke, given the “.be” domain extension. But using better known, https://googleblog.blogspot.com/, it turns out to be genuine.

What threw me was the idea of being “informed” being paired with “Google + Fox.” That’s what I hope the Stanford SNLI corpus classifies as a contradiction.

The three services are:

  • “…publishing long-form text, photos and videos throughout the debate, campaigns can now give extended responses, answer questions they didn’t get a chance to on stage, and rebut their opponents.”
  • “…key insights from Google trends…”
  • three YouTube content creators will ask the candidates a question

To summarize, you will be “informed” by:

  • Longer repetition of semantically null statements by the candidates
  • Timely trend information of dubious value
  • People possibly less informed than you asking questions

Fox’s involvement, given its emphasis on entertainment as opposed to useful and/or factual news reporting, is a given.

What is surprising is that Google is a voluntary shill to this sideshow.

If you watch the Republican Presidential debate, early card or the main event, you will be dumber for having seen it.

Collecting Case Data (law)

Filed under: Journalism,Law,Law - Sources,News,Reporting — Patrick Durusau @ 3:32 pm

If you do any amount of legal research, a form for briefing cases can save you from forgetting the citation to a case with the perfect quote.

Everyone has a different style for case briefs but Mr. K– (@kirschsubjudice), has created one at Google Forms, called imaginatively enough: Case Brief.

It will seem like a lot of work at first but reviewing your case briefs will save lots of time over re-reading photocopies of decisions and/or pulling all the volumes, again, when fact checking your story.
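Whatever layout you settle on, a brief earns its keep when it captures at least: the full citation, the parties, the key facts, the procedural history, the issue, the holding, the court’s reasoning, and any quotable language with pin cites.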

Undocumented Admin Access – Backdoor or Feature? – You Decide

Filed under: Cybersecurity,Security — Patrick Durusau @ 11:43 am

Adrian Bridgwater uncovers a cybersecurity version of three card monte in Fortinet on SSH vulnerabilities: look, this really isn’t a backdoor, honest.

Fortinet created an undocumented method to communicate with FortiManager devices. Or in Fortinet’s own security warning:

An undocumented account used for communication with authorized FortiManager devices exists on some versions of FortiOS, FortiAnalyzer, FortiSwitch and FortiCache.

On vulnerable versions, and provided “Administrative Access” is enabled for SSH, this account can be used to log in via SSH in Interactive-Keyboard mode, using a password shared across all devices. It gives access to a CLI console with administrative rights.

In an update to previous attempts at obfuscation, Fortinet says:

As previously stated, this vulnerability is an unintentional consequence of a feature that was designed with the intent of providing seamless access from an authorized FortiManager to registered FortiGate devices. It is important to note, this is not a case of a malicious backdoor implemented to grant unauthorized user access.

Even with a generous reading, Fortinet created a “feature” that benefited only Fortinet, did not disclose it to its customers, and lessened the security of those customers.

If “backdoor” is limited to malicious third parties, perhaps we should call this a “designed security defect” by a manipulative first party.

January 25, 2016

Stanford NLP Blog – First Post

Filed under: Machine Learning,Natural Language Processing — Patrick Durusau @ 8:37 pm

Sam Bowman posted The Stanford NLI Corpus Revisited today at the Stanford NLP blog.

From the post:

Last September at EMNLP 2015, we released the Stanford Natural Language Inference (SNLI) Corpus. We’re still excitedly working to build bigger and better machine learning models to use it to its full potential, and we sense that we’re not alone, so we’re using the launch of the lab’s new website to share a bit of what we’ve learned about the corpus over the last few months.

What is SNLI?

SNLI is a collection of about half a million natural language inference (NLI) problems. Each problem is a pair of sentences, a premise and a hypothesis, labeled (by hand) with one of three labels: entailment, contradiction, or neutral. An NLI model is a model that attempts to infer the correct label based on the two sentences.

A high level overview of the SNLI corpus.
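The shape of an SNLI problem is simple enough to write down. A minimal sketch (TypeScript; the example pair is invented for illustration, not drawn from the corpus):

```typescript
// One natural language inference problem, as described in the post.
type NliLabel = "entailment" | "contradiction" | "neutral";

interface NliExample {
  premise: string;
  hypothesis: string;
  label: NliLabel; // gold label, assigned by hand
}

const example: NliExample = {
  premise: "A soccer player is kicking a ball toward the goal.",
  hypothesis: "Someone is playing a sport.",
  label: "entailment",
};
```

An NLI model gets the two sentences and must predict the label; SNLI supplies roughly half a million such triples.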

The news of Marvin Minsky‘s death today must have arrived too late for inclusion in the post.

Addressing The Concerns Of The Selfish

Filed under: Open Access,Open Data — Patrick Durusau @ 8:22 pm

A burnt hand didn’t teach any lessons to Dr. Jeffrey M. Drazen of the New England Journal of Medicine (NEJM).

Just last week Jeffrey and a co-conspirator took to the editorial page of the NEJM to denounce as “parasites,” scientists who reuse data developed by others. Especially, if the data developers weren’t included in the new work. See: Parasitic Re-use of Data? Institutionalizing Toadyism.

Overly sensitive, as protectors of greedy people tend to be, Jeffrey returns to the editorial page to say:

In the process of formulating our policy, we spoke to clinical trialists around the world. Many were concerned that data sharing would require them to commit scarce resources with little direct benefit. Some of them spoke pejoratively in describing data scientists who analyze the data of others.3 To make data sharing successful, it is important to acknowledge and air those concerns.(Data Sharing and The Journal)

On target with concerns about data sharing requiring “…scarce resources with little direct benefit.”

Except Jeffrey forgot to mention that in his editorial about “parasites.”

Not a single word. The “cost free” myth of sharing data persists and the NEJM’s voice could be an important one in dispelling that myth.

But not Jeffrey, he took up his lance to defend the concerns of the selfish.

I will post separately on the issue of the cost of data sharing, etc., which as I say, is a legitimate concern.

We don’t need to resort to toadyism to satisfy the concerns of scientists over re-use of their data.

Create all the needed mechanisms to compensate for the sharing of data and if anyone objects or has “concerns” about re-use of data, cease funding them and/or any project of which they are a member.

There is no right to public funding for research, especially for scientists who have developed a sense of entitlement to public funding, for their own benefit.

You might want to compare the NEJM position to that of the radio astronomy community which shares both raw and processed data with anyone who wants to download it.

It’s a question of “privilege,” and not public safety, etc.

It’s annoying enough that people are selfish with research data, don’t be dishonest as well.

Amazon Top 20 Books in Data Mining – 18? Low Quality Listicle?

Filed under: Books,Data Mining — Patrick Durusau @ 5:19 pm

Amazon Top 20 Books in Data Mining by Matthew Mayo.

Matthew’s bio says:

Bio: Matthew Mayo is a computer science graduate student currently working on his thesis parallelizing machine learning algorithms. He is also a student of data mining, a data enthusiast, and an aspiring machine learning scientist.

So, puzzle me this:

  • Why does this listicle have “Data Science From Scratch: First Principles with Python” by Joel Grus, listed twice?
  • Why does David Pogue’s “iPhone: The Missing Manual” appear in this list?

“Data Science From Scratch: First Principles with Python” appears twice because one is paperback and the other is Kindle. Amazon treats those as separate subjects for sales purposes, although to a reader they are more likely a single subject, which has several formats.

The appearance of “iPhone: The Missing Manual” in this listing is a category error.

If you want to generate unproofed listicles of best sellers, start with the Amazon best sellers link for computer science (http://www.amazon.com/Best-Sellers-Books-Computers-Technology/zgbs/books/5/ref=zg_bs_unv_b_2_549646_1) or choose one of its many sub-categories, such as data mining.

The measure of a listicle isn’t how easy it was to generate but how useful it is to the targeted community.

Duplication and irrelevant results detract from the usefulness of a listicle.

Yes?

January 24, 2016

A Comprehensive Guide to Google Search Operators

Filed under: News,Reporting,Search Engines,Search Interface,Searching — Patrick Durusau @ 5:25 pm

A Comprehensive Guide to Google Search Operators by Marcela De Vivo.

From the post:

Google is, beyond question, the most utilized and highest performing search engine on the web. However, most of the users who utilize Google do not maximize their potential for getting the most accurate results from their searches.

By using Google Search Operators, you can find exactly what you are looking for quickly and effectively just by changing what you input into the search bar.

If you are searching for something simple on Google like [Funny cats] or [Francis Ford Coppola Movies] there is no need to use search operators. Google will return the results you are looking for effectively no matter how you input the words.

Note: Throughout this article whatever is in between these brackets [ ] is what is being typed into Google.

When [Francis Ford Coppola Movies] is typed into Google, Google reads the query as Francis AND Ford AND Coppola AND Movies. So Google will return pages that have all those words in them, with the most relevant pages appearing first. Which is fine when you’re searching for very broad things, but what if you’re trying to find something specific?

What happens when you’re trying to find a report on the revenue and statistics from the United States National Park System in 1995 from a reliable source, and not using Wikipedia?

I can’t say that Marcela’s guide is comprehensive for Google in 2016, because I am guessing the post was written in 2013. Hard to say if early or late 2013 without more research than I am willing to donate. Dating posts makes it easy for readers to spot information that is current or past its use-by date.

For the information that is present, this is a great presentation and list of operators.

One way to use this post is to work through every example but use terms from your domain.
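
For example, reworking the National Park question from the post (these operators are documented by Google as of this writing, though result behavior is Google’s to change):

  • [national park system revenue 1995 site:nps.gov] restricts results to the Park Service’s own site.
  • ["national park" revenue statistics 1995 -site:wikipedia.org filetype:pdf] requires the exact phrase, excludes Wikipedia, and returns only PDF documents.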

If you are mining the web for news reporting, compete against yourself on successive stories or within a small group.

Great resource for creating a search worksheet for classes.

Introducing d3-scale

Filed under: D3,Graphics,Visualization — Patrick Durusau @ 4:33 pm

Introducing d3-scale by Mike Bostock.

From the post:

I’d like D3 to become the standard library of data visualization: not just a tool you use directly to visualize data by writing code, but also a suite of tools that underpin more powerful software.

To this end, D3 espouses abstractions that are useful for any visualization application and rejects the tyranny of charts.

…(emphasis in original)

Quoting from both Leland Wilkinson (The Grammar of Graphics) and Jacques Bertin (Semiology of Graphics), Mike says D3 should handle ordinal and categorical dimensions, in addition to real numbers.
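
To see what that means in practice, here is a minimal sketch against the d3-scale module, assuming the post-modularization entry points scaleLinear and scaleOrdinal (earlier releases spelled these differently):

  import { scaleLinear, scaleOrdinal } from "d3-scale";

  // Quantitative: map a numeric domain onto pixel positions.
  const x = scaleLinear().domain([0, 100]).range([0, 960]);
  console.log(x(50)); // 480

  // Categorical: map discrete values onto a discrete range (here, colors).
  const color = scaleOrdinal(["#e41a1c", "#377eb8", "#4daf4a"])
    .domain(["apples", "oranges", "pears"]);
  console.log(color("oranges")); // "#377eb8"

The ordinal scale is the Bertin side of the house: discrete values mapped onto a visual channel, with no charting assumptions attached.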

Much has been done to expand the capabilities of D3, but it remains up to you to expand the usage of D3 in new and innovative ways.

I suspect you can already duplicate the images (most of them anyway) from the Semiology of Graphics, for example, but that isn’t the same as choosing a graphic and scale that will present information usefully to a user.

Much is left to be done, but Mike has given D3 a push in the right direction.

Will you be pushing alongside him?

SnowCrew: Volunteer to Help Your Neighbors [Issue Tracking For Snow Shoveling]

Filed under: Interface Research/Design — Patrick Durusau @ 11:17 am

SnowCrew: Volunteer to Help Your Neighbors

From the post:

Here’s how to see who needs help shoveling near you:
  1. Zoom into the map on the left (below on mobile) to where you live or want to help shovel
  2. When you locate someone nearby, click on the issue for more information
  3. Click on the link on this issue to be taken to the issue on SeeClickFix
  4. While on the issue in SeeClickFix, leave a comment to let the person who requested help, and other volunteers know you are heading over to help.
  5. When you are done, go back to the issue and close it so the person who made the request and other volunteers know it is complete.
  6. Give yourself a Hi5 for being an awesome neighbor!

Disclaimer: By volunteering, you do so at your own risk.

A great illustration of a simple interface.

Compare and contrast with topic map interfaces, where an errant selection or keystroke opens up new, possibly duplicated options.

If our “working memory” can only hold up to seven items, what is the result of inflicting more than seven options on users?

Pay attention the next time you use a complex application, like a word processor or spreadsheet. Some people do quite complex operations with them, but day to day, how many options do you use?

Certainly, a large number of options are available when you need them, but how many do you reach for daily?

I’ll tell you mine: open, close, save, search/replace, copy, paste, insert, and I use what has been described as a “thermonuclear word processor.” 😉

It has more options than MS Word but I don’t have to use them unless needed.

That’s the trick, isn’t it? To expose users to the options they need, but only when needed and not before.
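
As a sketch of that trick (the names here are illustrative, not taken from any real toolkit), progressive disclosure can be as simple as filtering the command set:

  interface Command { name: string; frequent: boolean; }

  const commands: Command[] = [
    { name: "open", frequent: true },
    { name: "save", frequent: true },
    { name: "copy", frequent: true },
    { name: "paste", frequent: true },
    { name: "mail merge", frequent: false },
    { name: "macro editor", frequent: false },
  ];

  // Default view: at most seven everyday commands; the full set stays
  // one explicit step away.
  function visibleCommands(showAll: boolean): Command[] {
    return showAll ? commands : commands.filter(c => c.frequent).slice(0, 7);
  }

  console.log(visibleCommands(false).map(c => c.name)); // the everyday few

Everything else remains reachable behind an explicit “More…” action, so the default view never exceeds working memory.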

A topic map interface that requires me to choose between Berkeley and Hume on causation (assuming I remember the arguments clearly), isn’t going to be popular or terribly useful.

Institutional Dementia At Big Blue?

Filed under: IoT - Internet of Things,Project Management — Patrick Durusau @ 10:24 am

Why over two-thirds of the Internet of Things projects will fail by Sushil Pramanick (Associate Partner, Consultative Sales, IoT Leader, IBM Analytics).

From the post:

When did you first become interested in the Internet of Things (IoT)? If you’re like me, you’ve probably been following the news related to the IoT for years. As technology lovers, I’ll bet we have a lot in common. We are intensely curious. We are problem-solvers, inventors and perhaps more than anything else, we are relentlessly dedicated to finding better answers to our everyday challenges. The IoT represents a chance for us—the thinkers—to move far beyond the limiting technologies of the past and to unlock new value, new insights and new opportunities.

In mid-2005, Gartner stated that over 50 percent of data warehouse projects failed due to lack of adoption with data quality issues and implementation failures. In 2012, this metric was further scaled back to fewer than 30 percent. The parallelism here is that the Internet of Things hype is similar to data warehouse and business intelligence hype two decades ago when many companies embarked on decentralized reporting and/or basic analytics solutions. The problem was that some companies tried to build in-house, large enterprise data warehouse platforms that were disconnected and inherently had integration and data quality issues. A decade later, 50 percent of these projects failed. Another decade later, another over 20 percent failed. Similarly, companies are now trying to embark on Internet of Things initiatives using very narrow, point-focused solutions with very little enterprise IoT strategy in place, and in some cases, engaging or building unproven solution architectures.

Project failure rates are hardly news. But I mention this to illustrate the failure of institutional memory at IBM.

It wasn’t that many years ago (2008) that IBM published a forty-eight-page white paper, Making Change Work, that covers the same ground as Sushil Pramanick.

Do you think “Consultative Sales, IBM Analytics” doesn’t talk to “IBM Global Business Services?”

Or is IBM’s institutional memory broken up by projects, departments, divisions, and communicated in part by formal documents but also by folklore, rumor and water fountain gossip?

A faulty institutional memory, with missed opportunities, duplicated projects, and a general failure to thrive, won’t threaten the existence of an IBM. At least not right away.

Can you say the same for your organization?

Topic maps can help your organization avoid institutional dementia.

Interested?

Searching For Sleeping Children? (IoT)

Filed under: Cybersecurity,IoT - Internet of Things,Security — Patrick Durusau @ 7:35 am

Internet of Things security is so bad, there’s a search engine for sleeping kids by J.M. Porup.

From the post:

Shodan, a search engine for the Internet of Things (IoT), recently launched a new section that lets users easily browse vulnerable webcams.

The feed includes images of marijuana plantations, back rooms of banks, children, kitchens, living rooms, garages, front gardens, back gardens, ski slopes, swimming pools, colleges and schools, laboratories, and cash register cameras in retail stores, according to Dan Tentler, a security researcher who has spent several years investigating webcam security.

“It’s all over the place,” he told Ars Technica UK. “Practically everything you can think of.”

We did a quick search and turned up some alarming results:
….

Just so you know, the images from webcams are a premium feature of Shodan.

As the insecure IoT continues to spread, coupling the latest face recognition software with webcam feeds and public image databases could be a viable service: early warning for those seeking to avoid detection, and video evidence for those hoping for it.

Similar to detective agencies but on a web scale.
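
For the curious, Shodan exposes its search over a REST endpoint. A hedged TypeScript sketch, assuming Node 18+ (for the global fetch) and an API key in the SHODAN_KEY environment variable; the query string is illustrative:

  // Query Shodan's host search endpoint and print a few matches.
  async function searchShodan(query: string): Promise<void> {
    const key = process.env.SHODAN_KEY;
    const url = "https://api.shodan.io/shodan/host/search?key=" + key +
      "&query=" + encodeURIComponent(query);
    const res = await fetch(url);
    const data = await res.json();
    console.log("total matches: " + data.total);
    for (const m of data.matches.slice(0, 5)) {
      console.log(m.ip_str, m.port, m.org); // per-match fields Shodan returns
    }
  }

  searchShodan("webcamxp"); // a webcam server banner commonly searched for

Remember that the image feeds themselves sit behind Shodan’s paid tier.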

There are IP-enabled digital cameras far less obvious than the ones from Wowwee:

wowwee

But most of them still scream “digital camera,” in white, with an obvious lens, etc.

Porup reports that the FTC is attempting to be proactive about webcam security, but penalties imposed after a substantial number of insecure webcams appear won’t help those already exposed on the Internet.
