Google Study: Most Security Questions Easy To Hack [+ security insight about Google]

July 7th, 2015

Google Study: Most Security Questions Easy To Hack by Shirley Siluk.

From the post:

There’s a big problem with the security questions often used to help people log into Web sites, or remember or access lost passwords — questions with answers that are easy to remember are also easy for hackers to guess. That’s the key finding of a study that Google recently presented at the International World Wide Web Conference in Florence, Italy.

Google said it analyzed hundreds of millions of secret questions and answers that users had employed to recover access to their accounts. It then calculated how easily hackers could guess the answers to those questions.

In many cases, the answers were relatively easy to hit upon because of unique cultural factors, according to the study. For English speakers, for example, hackers had a 19.7 percent chance of guessing — in just one guess — the right answer to the question, “What is your favorite food?” (Answer: pizza.)

‘Neither Secure nor Reliable’

Google undertook the study because, “despite the prevalence of security questions, their safety and effectiveness have rarely been studied in depth,” noted Anti-Abuse Research Lead Elie Bursztein and Software Engineer Ilan Caron. The conclusion reached after looking at all those millions of questions and answers? “(S)ecret questions are neither secure nor reliable enough to be used as a standalone account recovery mechanism,” Bursztein and Caron said Thursday in a post on Google’s Online Security Blog.

Shirley goes on to give examples of how the answers to some security questions are culturally determined but also quotes suggestions for making your answers to secret questions more secure.

What is the one insight into Google security you can draw from this article?

Google stored the answers to secret questions as clear text.

Yes?

Otherwise, how did they develop the statistics about secret answer usage?
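
As a point of reference, here is a minimal Python sketch (with made-up answers, not Google's data or method) of the kind of frequency analysis such statistics imply. Whatever the method, it requires access to the answers, or at least to equality classes over them, in some comparable form:

from collections import Counter

def one_guess_success_rate(answers):
    # Probability of guessing right on the first try: frequency of the modal answer.
    counts = Counter(a.strip().lower() for a in answers)
    most_common_answer, hits = counts.most_common(1)[0]
    return most_common_answer, hits / len(answers)

if __name__ == "__main__":
    # Made-up sample, purely for illustration.
    sample = ["Pizza", "pizza", "sushi", "tacos", "pizza", "curry",
              "pizza", "pasta", "pizza", "burgers"]
    answer, rate = one_guess_success_rate(sample)
    print(f"Best single guess: {answer!r} succeeds {rate:.1%} of the time")

On this toy sample the best single guess succeeds 50% of the time; Google's reported 19.7% for "pizza" is the same kind of number, computed over a vastly larger population.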

No alternative answer is evident from: Secrets, Lies, and Account Recovery: Lessons from the Use of Personal Knowledge Questions at Google by Joseph Bonneau, Elie Bursztein, Ilan Caron, Rob Jackson, and Mike Williamson.

Abstract:

We examine the first large real-world data set on personal knowledge questions’ security and memorability from their deployment at Google. Our analysis confirms that secret questions generally offer a security level that is far lower than user-chosen passwords. It turns out to be even lower than proxies such as the real distribution of surnames in the population would indicate. Surprisingly, we found that a significant cause of this insecurity is that users often don’t answer truthfully. A user survey we conducted revealed that a significant fraction of users (37%) who admitted to providing fake answers did so in an attempt to make them “harder to guess” although on aggregate this behavior had the opposite effect as people “harden” their answers in a predictable way.

On the usability side, we show that secret answers have surprisingly poor memorability despite the assumption that reliability motivates their continued deployment. From millions of account recovery attempts we observed a significant fraction of users (e.g. 40% of our English-speaking US users) were unable to recall their answers when needed. This is lower than the success rate of alternative recovery mechanisms such as SMS reset codes (over 80%).

Comparing question strength and memorability reveals that the questions that are potentially the most secure (e.g. what is your first phone number) are also the ones with the worst memorability. We conclude that it appears next to impossible to find secret questions that are both secure and memorable. Secret questions continue to have some use when combined with other signals, but they should not be used alone and best practice should favor more reliable alternatives.

Google has moved on to more secure methods for account recovery but the existence of the secret answer data, even from 2013, remains a danger for some users on the Internet.

Ancient [?] Craft of Information Visualization

July 7th, 2015

Vintage Infodesign [125]: More examples of the ancient craft of information visualization by Tiago Veloso.

From the post:

To open this week’s edition of Vintage InfoDesign, we picked some of the maps published in the 1800s/early 1900’s about the Battle of Waterloo. As we showed you before, on June 18th several newspapers marked the 200th anniversary of Napoleon’s final attempt to rule Europe with stunning pieces of infographic design, and since we haven’t featured any “oldies” related to this topic, we thought it would be interesting to do some Internet “digging”.

Hope you enjoy our findings, and feel free to leave the links to other charts and maps about Waterloo, in the comments section.

I’m not entirely comfortable with using the term “ancient” to describe maps depicting the Battle of Waterloo. I think of “ancient” history as ending far earlier; the fall of the last native Egyptian dynasty, in about 343 BCE, is closer to what the word suggests to me.

What Lies Beneath: A Deep Dive into Clojure’s data structures

July 7th, 2015

What Lies Beneath: A Deep Dive into Clojure’s data structures by Mohit Thatte. (slides)

From the description:

Immutable, persistent data structures are at the heart of Clojure’s philosophy. It is instructive to see how these are implemented, to appreciate the trade-offs between persistence and performance. Let’s explore the key ideas that led to effective, practical implementations of these data structures. There will be animations that should help clarify key concepts!

Video: Running time a little over thirty-five minutes (don’t leave for coffee).

Closes with a great reading list.

You may also want to review: Purely functional data structures demystified (slides).

Description:

I spoke about ‘Purely Functional Data structures’ at Functional Conference 2014 in Bangalore. These are my slides with an extra section on further study.

This talk is based on Chris Okasaki’s book titled Purely Functional Data Structures. The gist is that immutable and persistent data structures can be designed without sacrificing performance.
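
Clojure’s actual structures are hash array mapped tries and wide-branching persistent vectors; as a toy illustration of the underlying idea, path copying with structural sharing, here is a minimal Python sketch using a persistent binary search tree (my example, not taken from the talks):

from collections import namedtuple

Node = namedtuple("Node", "key left right")

def insert(node, key):
    # Return a new tree containing key; the old tree is left untouched.
    if node is None:
        return Node(key, None, None)
    if key < node.key:
        return Node(node.key, insert(node.left, key), node.right)
    if key > node.key:
        return Node(node.key, node.left, insert(node.right, key))
    return node  # key already present; share the whole subtree

def contains(node, key):
    while node is not None:
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

v1 = None
for k in [5, 3, 8, 1]:
    v1 = insert(v1, k)
v2 = insert(v1, 7)                       # a new version of the tree
print(contains(v1, 7), contains(v2, 7))  # False True  (both versions remain usable)
print(v1.left is v2.left)                # True  (the untouched subtree is shared)

Only the nodes on the path from the root to the change are copied; everything else is shared between versions, which is how persistence stays affordable.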

Computer trivia question from the first video: Why are the colors red and black used for red and black trees?

Today’s Special on Universal Languages

July 7th, 2015

I have often wondered about the fate of the Loglan project, but never seriously enough to track down any potential successor.

Today I encountered a link to Lojban, which is described by Wikipedia as follows:

Lojban (pronounced [ˈloʒban]) is a constructed, syntactically unambiguous human language based on predicate logic, succeeding the Loglan project. The name “Lojban” is a compound formed from loj and ban, which are short forms of logji (logic) and bangu (language).

The Logical Language Group (LLG) began developing Lojban in 1987. The LLG sought to realize Loglan’s purposes, and further improve the language by making it more usable and freely available (as indicated by its official full English title, “Lojban: A Realization of Loglan”). After a long initial period of debating and testing, the baseline was completed in 1997, and published as The Complete Lojban Language. In an interview in 2010 with the New York Times, Arika Okrent, the author of In the Land of Invented Languages, stated: “The constructed language with the most complete grammar is probably Lojban—a language created to reflect the principles of logic.”

Lojban was developed to be a worldlang; to ensure that the gismu (root words) of the language sound familiar to people from diverse linguistic backgrounds, they were based on the six most widely spoken languages as of 1987—Mandarin, English, Hindi, Spanish, Russian, and Arabic. Lojban has also taken components from other constructed languages, notably the set of evidential indicators from Láadan.

I mention this just in case someone proposes to you that a universal language would increase communication and decrease ambiguity, resulting in better, more accurate communication in all fields.

Yes, yes it would. And several already exist. Including Lojban. Their language can take its place alongside other universal languages, i.e., it can increase the number of languages that make up the present matrix of semantic confusion.

In case you know such a person, what part of “new languages increase the potential for semantic confusion” seems unclear?

Google search poisoning – old dogs learn new tricks

July 7th, 2015

Google search poisoning – old dogs learn new tricks by Dmitry Samosseiko.

From the post:

These days, every company knows that having its website appear at the top of Google’s results for relevant keyword searches makes a big difference in traffic and helps the business. Numerous search engine optimization (SEO) techniques have existed for years and provided marketers with ways to climb up the PageRank ladder.

In a nutshell, to be popular with Google, your website has to provide content relevant to specific search keywords and also to be linked to by a high number of reputable and relevant sites. (These act as recommendations, and are rather confusingly known as “back links,” even though it’s not your site that is doing the linking.)

Google’s algorithms are much more complex than this simple description, but most of the optimization techniques still revolve around those two goals. Many of the optimization techniques that are being used are legitimate, ethical and approved by Google and other search providers. But there are also other, and at times more effective, tricks that rely on various forms of internet abuse, with attempts to fool Google’s algorithms through forgery, spam and even hacking.

One of the techniques used to mislead Google’s page indexer is known as cloaking. A few days ago, we identified what we believe is a new type of cloaking that appears to work very well in bypassing Google’s defense algorithms.

Dmitry reports that Google was notified of this new form of cloaking, so it may not work for much longer.
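
For readers unfamiliar with cloaking, here is a rough, hypothetical Python sketch of the basic user-agent variant: fetch the same URL as Googlebot and as an ordinary browser, then compare what comes back. Real detection, and the variant Dmitry describes, is considerably more subtle; this only illustrates the idea:

import urllib.request

GOOGLEBOT = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")
BROWSER = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36"

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def looks_cloaked(url, threshold=0.5):
    as_bot = fetch(url, GOOGLEBOT)
    as_user = fetch(url, BROWSER)
    smaller, larger = sorted((len(as_bot), len(as_user)))
    # Crude heuristic: wildly different response sizes for the two user agents.
    return larger > 0 and smaller / larger < threshold

if __name__ == "__main__":
    print(looks_cloaked("http://example.com/"))  # example.com serves everyone the same page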

I first read about this in Notes from SophosLabs: Poisoning Google search results and getting away with it by Paul Ducklin.

I’m not sure I would characterize this as “poisoning Google search.” It alters Google search results, to be sure, but “poisoning” implies that standard Google search results represent some “standard” of search results. Google search results are the outcome of undisclosed algorithms run on undisclosed content, subject to undisclosed processing of the resulting scores, and output after still more undisclosed processing.

Just putting it into large containers, I see four large boxes of undisclosed algorithms and content, all of which impact the results presented as Google Search results. Are Google Search results the standard output from four or more undisclosed processing steps of unknown complexity?

That doesn’t sound like much of a standard to me.

You?

Which Functor Do You Mean?

July 6th, 2015

Peteris Krumins calls attention to the classic confusion of names that topic maps address in On Functors.

From the post:

It’s interesting how the term “functor” means completely different things in various programming languages. Take C++ for example. Everyone who has mastered C++ knows that you call a class that implements operator() a functor. Now take Standard ML. In ML functors are mappings from structures to structures. Now Haskell. In Haskell functors are just homomorphisms over containers. And in Prolog functor means the atom at the start of a structure. They all are different. Let’s take a closer look at each one.

Peteris has said twice in the first paragraph that each of these “functors” is different. Don’t rush to his 2010 post to point out they are different. That was the point of the post. Yes?
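
To make the collision concrete, here is a small Python sketch (mine, not from the post) contrasting two of the senses: the C++-style callable-object “functor” and the Haskell-style map-over-a-container Functor:

# Sense 1: C++-style "functor": an object you can call like a function.
class Adder:
    def __init__(self, n):
        self.n = n
    def __call__(self, x):   # the analogue of operator()
        return x + self.n

add3 = Adder(3)
print(add3(10))              # 13

# Sense 2: Haskell-style Functor: map a function over a container
# while preserving the container's structure (fmap).
def fmap(f, container):
    if isinstance(container, list):
        return [f(x) for x in container]
    if isinstance(container, tuple):
        return tuple(f(x) for x in container)
    raise TypeError("no Functor instance for %r" % type(container))

print(fmap(add3, [1, 2, 3]))  # [4, 5, 6]

Same word, unrelated machinery, which is exactly the sort of name collision that scoping by language (or by topic) is meant to untangle.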

Exercise: All of these uses of functor could be scoped by language. What properties of each “functor” would you use to distinguish them beside their language of origin?

Twelve Tips for Getting Started With Data Journalism

July 6th, 2015

Twelve Tips for Getting Started With Data Journalism by Nils Mulvad and Helena Bengtsson.

No mention of Python or R, no instructions for No-SQL or SQL databases, no data cleaning exercises, and yet probably the best advice you will find for data journalism (or data science for that matter).

The essential insight of these twelve tips is that the meaning of the data, which implies answering “why does this matter?”, is the task of data journalism/science.

Anyone with sufficient help can generate graphs, produce charts, apply statistical techniques to data sets, but if it is all just technique, no one is going to care.

The twelve tips offered here are good for a daily read with your morning coffee!

Highly recommended!

Hacking Team Customers

July 6th, 2015

The recent Hacking Team hack generated a rash of self-righteous tweets about the company’s sales to “repressive” governments.

Before you get overly excited about the sins of the Hacking Team, consider this graphic of arms sales by the United States and Russia:

[Graphic: US/Russia arms sales compared]

From US/Russia Arms Sales Race by Allan Smith and Skye Gould.

Buying arms is a good indication of the intent to repress someone, so I don’t see many places that don’t have repressive governments.

Speaking of repression, this is the best visualization I have seen to date of the Greek debt crisis:

http://demonocracy.info/infographics/eu/debt_greek/debt_greek.html

It’s a very large visualization, so I won’t attempt to replicate it here.

The German government is trying to repress the Greek people for the foreseeable future in order to collect on its debt. Think of it as international loan sharking. If you don’t pay, we will break your legs. Or in this particular case, austerity measures that will blight the lives of millions. Sounds repressive to me.

We can debate repression one way or the other but the important resource for US citizens is: Office of Foreign Assets Control – Sanctions Programs and Information. Sanctions programs, well, carry sanctions for violating their terms. On you.

I have serious questions about the sanctions list both in terms of who is included and who is not. However, unless you have a large appetite for risk, you had best follow its guidance (or your government’s similar list).

Our Uncritical National Media

July 6th, 2015

FBI and Media Still Addicted to Ginning Up Terrorist Hysteria – But They Have Never Been Right by Adam Johnson is a stunning indictment of our national media as “uncritical” of government terrorist warnings.

I say “uncritical” because despite forty (40) false terrorist warnings in a row, there has been no, repeat no, terrorist attack in the United States related to those warnings. Not one.

The national media, say the New York Times of my youth, would have “broken” the news of a terrorist warning, but then it would have sought information to verify that warning. That is, why is the government issuing a warning today and not yesterday, or next week?

Failing to find such evidence, as it would have in all forty (40) past cases, it would have pressed, investigated and mocked the government until its thin tissue of lies was plain for all to see.

How many times does a government source have to misrepresent facts before your report starts with:

Just in from the habitual liars at the Department of Homeland Security…

and includes a back story on how the Department of Homeland Security has never been right on one of its warnings, nor has its Transportation Security Administration (TSA) ever caught a terrorist.

Instead, as Adam reports, this is what we get:

On Monday, several mainstream media outlets repeated the latest press release by the FBI that the country was under a new “heightened terror alert” from “ISIL-inspired attacks” “leading up to the July 4th weekend.” One of the more sensational outlets, CNN, led with the breathless warning on several of its cable programs, complete with a special report by The Lead’s Jim Sciutto in primetime:

The threat was given extra credence when former CIA director—and consultant at DC PR firm Beacon Global Strategies—Michael Morell went on CBS This Morning (6/29/15) and scared the ever-living bejesus out of everyone by saying he “wouldn’t be surprised if we were sitting [in the studio] next week discussing an attack on the US.” The first piece of evidence Morell used to justify his apocalyptic posture, the “50 ISIS arrests,” was accompanied by a scary map on the CBS jumbotron showing “ISIS arrests” all throughout the US:

But one key detail is missing from this graphic: None of these “ISIS arrests” involved any actual members of ISIS, only members of the FBI—and their network of informants—posing as such. (The one exception being the man arrested in Arizona, who, while having no contact with ISIS, was also not prompted by the FBI.) So even if one thinks the threat of “lone wolf” attacks is a serious one, it cannot be said these are really “ISIS arrests.” Perhaps on some meta-level, it shows an increase of “radicalization,” but it’s impossible to distinguish between this and simply more aggressive sting operations by the FBI.

I would think that competent, enterprising reporters could have ferreted out all the material that Adam mentions in his post. They could have made the case for the groundless nature of the 4th of July security warning.

But no member of the national media did.

In the aftermath of yet another bogus terror warning, the national media should say why it dons pom-poms to promote every terror alert from the FBI or DHS, instead of serving the public’s interest with critical investigation of alleged terror threats.

New Android Malware Sample Found Every 18 Seconds

July 5th, 2015

More than 440K new Android malware strains found in Q1, study finds by Terri Robinson.

From the post:

More than 440,000 new strains of Android malware were discovered by security experts at G DATA analyzing data for the first quarter of 2015.

That the company’s Q1 2015 Mobile Malware Report found so many strains of malware, representing a 6.4 percent jump from the quarter before, is not surprising, considering half of U.S. consumers use a smartphone or tablet to do their banking and 78 percent of those on the Internet make purchases online, giving cybercriminals a large pool of potential victims as well as the opportunity for significant financial gain.

“Mobile banking has become a very profitable target of opportunity,” Andy Hayter, security evangelist at G DATA, told SCMagazine.com in an email correspondence. “With mobile banking applications being new, bad guys are taking advantage, and targeting these apps since the majority of those using them are unaware that you should protect your mobile device from malware.”

The uptick represents 4,900 new Android malware files each day of the quarter, up 400 files daily from those recorded in the second half of 2014. About 200 new malware samples were identified hourly, meaning that a new malware sample was discovered every 18 seconds.
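
The headline rate checks out; a quick back-of-the-envelope calculation from the quarterly figure quoted above:

new_strains_q1 = 440_000           # new Android malware strains found in Q1 2015
days_in_quarter = 90

per_day = new_strains_q1 / days_in_quarter
seconds_between = 24 * 60 * 60 / per_day

print(f"{per_day:,.0f} new files per day")         # ~4,900 per day
print(f"one every {seconds_between:.0f} seconds")  # ~18 seconds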

You know the problem. Apps want to work across multiple versions of Android and with third-party sites (like banks), which multiplies the number of security steps that have to be done right with each targeted interaction.

What are the odds of all those security steps being done right? With a new malware sample every 18 seconds, I would say the odds are heavily stacked against encountering a secure app. It could happen, but it is about as likely as the moon spontaneously becoming a black hole.

Take the same security precautions with your smart phone as you would with any other network connected device. Keep your OS/apps updated on a regular basis. Off-load data not needed for immediate access. Data not on your phone can’t be stolen if your phone is compromised.

Our World in Data

July 4th, 2015

Our World in Data by Max Roser.

Visualizations of War & Violence, Global Health, Africa, World Poverty and World Hunger & Food Provision.

An author chooses their own time period, but I find limiting the discussion of world poverty to the last 2,000 years problematic. Obtaining even estimated data would be difficult, but we know there were civilizations, particularly in the Ancient Near East and in Pre-Columbian America, that had rather high standards of living. For that matter, for the time period given, the poverty map skips over the Roman Empire at its height, saying “we know that every country was extremely poor compared to modern living standards.”

The Romans had public bath houses, running water, roads that we still use today, public entertainment, libraries, etc. I am not sure how they were “extremely poor compared to modern living standards.”

It is also problematic (slide 12) when Max says that:

Before modern economic growth the huge majority lived in extreme poverty and only a tiny elite enjoyed a better standard of living.

There are elites in every society that live better than most, but that doesn’t automatically imply that 84% to 94% of the world population was living in poverty. You don’t sustain a society such as the Aztecs or the Incas with only 6% to 16% of the population living outside poverty.

I am deeply doubtful of Max’s conclusion that in terms of poverty the world is becoming more “equal.”

Part of that skepticism is from being aware of statistics like:

“With less than 5 percent of world population, the U.S. uses one-third of the world’s paper, a quarter of the world’s oil, 23 percent of the coal, 27 percent of the aluminum, and 19 percent of the copper,” he reports. “Our per capita use of energy, metals, minerals, forest products, fish, grains, meat, and even fresh water dwarfs that of people living in the developing world.”
Use It and Lose It: The Outsize Effect of U.S. Consumption on the Environment

Considering that many of those resources are not renewable, there is a natural limit to how much improvement can or will take place outside of the United States. When renewable resources become more practical than they are today, they will only supplement the growing consumption of energy in the United States, not replace it.

Max provides access to his data sets if you are interested in exploring the data further. I would be extremely careful with his World Bank data because the World Bank does have an agenda to show the benefits of development across the world.

Considering the impact of consumption on the environment, the World Bank’s pursuit of a global consumption economy may be one of the more ill-fated schemes of all time.

If you are interested in this type of issue, the National Geographic’s Greendex may be of interest.

Sony Responsible For Sony Hack (say it’s not so!)

July 4th, 2015

Report claims the Sony cyberattack was pretty much all Sony’s fault by William Hughes.

From the post:

Last November, Sony Pictures Entertainment became the victim of one of the largest cyberattacks in U.S. history, with a group calling itself Guardians Of Peace infiltrating the company’s networks, stealing terabytes of data, and then wiping it from the system. The attack was a massive blow for the company, knocking its communication technology back to the fax machine, rendering it a public laughingstock, and ruining Tobey Maguire’s second life as enigmatic ramblin’ man Neil Deep. But now, six months and one fired co-chair later, the battered company might reasonably have come to the conclusion that things were finally cooling down. Sure, Julian Assange made news in April by posting all of the company’s stolen e-mails on a publicly searchable site for prurient perusal, but beyond that, it seemed that the worst was finally over.

But the worst is not over, it turns out, because six months is how long it took for Fortune magazine investigative reporter Peter Elkind to put together “Inside The Hack Of The Century,” a three-part examination of the company, and how its corporate culture contributed to the attack. Elkind apparently talked to more than 50 Sony employees about the hack, putting together a wide-ranging look at why Sony was such an alluring target for cybercrime.

An excellent series on the Sony Hack.

When you read Sony or others extolling the expertise of their attackers, keep this assessment in mind:

Ed Skoudis, a “white hat” hacker who teaches cyberdefense testing for corporate IT security professionals at the SANS Institute, says the skill level deployed at Sony looks “pretty average.” He puts its perpetrators on par with students in his mid-level classes. “It shows the defenses of Sony were not particularly good,” says Skoudis. “I didn’t see the bad guys jumping over any extreme hurdles, because there weren’t any extreme hurdles in place.” (in part 2)

After reading all three parts, ask yourself if the management at Sony sounds like your management?

One aspect of improving cybersecurity is improving management.

Good luck!

Generations of MBAs have labored mightily and the result was the management at Sony and Office of Personnel Management (OPM).

PS: It’s a tad early to call the Sony hack the “hack of the century.” What will we call someone taking over the air handling units and elevators in all of New York’s skyscrapers? Or disabling all cars of a particular brand? Or defrosting all the deep freezers in LA? All of that and more is coming, perhaps within the decade.

Stealing Laptop Crypto Keys At Conferences

July 4th, 2015

This Radio Bug Can Steal Laptop Crypto Keys, Fits Inside A Pita by Andy Greenberg.

From the post:

THE LIST OF paranoia-inducing threats to your computer’s security grows daily: Keyloggers, trojans, infected USB sticks, ransomware…and now the rogue falafel sandwich.

Researchers at Tel Aviv University and Israel’s Technion research institute have developed a new palm-sized device that can wirelessly steal data from a nearby laptop based on the radio waves leaked by its processor’s power use. Their spy bug, built for less than $300, is designed to allow anyone to “listen” to the accidental radio emanations of a computer’s electronics from 19 inches away and derive the user’s secret decryption keys, enabling the attacker to read their encrypted communications. And that device, described in a paper they’re presenting at the Workshop on Cryptographic Hardware and Embedded Systems in September, is both cheaper and more compact than similar attacks from the past—so small, in fact, that the Israeli researchers demonstrated it can fit inside a piece of pita bread.

“The result is that a computer that holds secrets can be readily tapped with such cheap and compact items without the user even knowing he or she is being monitored,” says Eran Tromer, a senior lecturer in computer science at Tel Aviv University. “We showed it’s not just possible, it’s easy to do with components you can find on eBay or even in your kitchen.”

Their key-stealing device, which they call the Portable Instrument for Trace Acquisition (yes, that spells PITA) consists of a loop of wire to act as an antenna, a Rikomagic controller chip, a Funcube software defined radio, and batteries. It can be configured to either collect its cache of stolen data on an SD storage card or to transmit it via Wifi to a remote eavesdropper. The idea to actually cloak the device in a pita—and name it as such—was a last minute addition, Tromer says. The researchers found a piece of the bread in their lab on the night before their deadline and discovered that all their electronics could fit inside it.

I was surprised by the comment:

The notion of someone planting an eavesdropping device less than two feet away from a target computer may seem farfetched as an espionage technique—even if that spy device is concealed in a pita (a potentially conspicuous object in certain contexts) or a stealthier disguise like a book or trashcan.

Really?

Andy must not attend many technical conferences. Here is a photo of one that I picked at random from Google images:

[Photo: conference attendees working on laptops at shared tables]

What looks to be within 19 inches of each of the laptops you see in that photo? Is the bottom of the table on which the computers sit within 19 inches of each laptop? Did you check under the table at the last security conference you attended?

Or for that matter, the last national intelligence conference where clearance was required to attend sessions?

Like the device, the under-the-table technique is cheap, highly effective, difficult to discover (when was the last time you looked at the bottom of a conference table?), etc. Don’t worry, I’m not giving away too much; there are refinements to the general idea.

One key to breaching or preventing breaches of security is to look at the world differently.

If you are interested in a different view of the world, you know where to find me.

Hacking Wireless Ghosts Vulnerable For Years

July 4th, 2015

Hacking Wireless Ghosts Vulnerable For Years by Lucas Apa.

From the post:

Is the risk associated to a Remote Code Execution vulnerability in an industrial plant the same when it affects the human life? When calculating risk, certain variables and metrics are combined into equations that are rendered as static numbers, so that risk remediation efforts can be prioritized. But such calculations sometimes ignore the environmental metrics and rely exclusively on exploitability and impact. The practice of scoring vulnerabilities without auditing the potential for collateral damage could underestimate a cyber attack that affects human safety in an industrial plant and leads to catastrophic damage or loss. These deceiving scores are always attractive for attackers since lower-priority security issues are less likely to be resolved on time with a quality remediation.

In the last few years, the world has witnessed advanced cyber attacks against industrial components using complex and expensive malware engineering. Today the lack of entry points for hacking an isolated process inside an industrial plant means that attacks require a combination of zero-day vulnerabilities and more money.

Two years ago, Carlos Mario Penagos (@binarymantis) and I (Lucas Apa) realized that the most valuable entry point for an attacker is in the air. Radio frequencies leak out of a plant’s perimeter through the high-power antennas that interconnect field devices. Communicating with the target devices from a distance is priceless because it allows an attack to be totally untraceable and frequently unstoppable.

In August 2013 at Black Hat Briefings, we reported multiple vulnerabilities in the industrial wireless products of three vendors and presented our findings. We censored vendor names from our paper to protect the customers who use these products, primarily nuclear, oil and gas, refining, petro-chemical, utility, and wastewater companies mostly based in North America, Latin America, India, and the Middle East (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and UAE). These companies have trusted expensive but vulnerable wireless sensors to bridge the gap between the physical and digital worlds.

Another interesting summer project idea involving cybersecurity. Industrial control systems, the ones bed-wetters at the DHS worry about being hacked over the Internet, may well be using insecure wireless devices. Not connected to the Internet, but vulnerable all the same.

See the blog post and OleumTech™ Wireless Sensor Network devices for technical details.

Speaking of wireless devices, many cities now have automatic meter reading, which opens up the potential to monitor others’ utility usage and potentially over- or under-report usage to a central authority.

It would make an interesting map of a city to overlay a street map with the density of detected wireless devices.
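
As a toy sketch of that idea, assuming you already have wardriving observations in a CSV with lat, lon and device_id columns (the file name and layout are my assumptions, purely for illustration), bucketing them into a coarse grid gives the density you would overlay on the street map:

import csv
from collections import defaultdict

CELL = 0.001  # grid cell size in degrees, roughly 100 m

def density_grid(path):
    cells = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):   # expected columns: lat, lon, device_id
            key = (round(float(row["lat"]) / CELL),
                   round(float(row["lon"]) / CELL))
            cells[key].add(row["device_id"])
    return {cell: len(devices) for cell, devices in cells.items()}

if __name__ == "__main__":
    grid = density_grid("observations.csv")  # hypothetical input file
    densest = sorted(grid.items(), key=lambda kv: -kv[1])[:10]
    for (glat, glon), count in densest:
        print(f"cell ({glat * CELL:.3f}, {glon * CELL:.3f}): {count} devices")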

I can only imagine what such a map would look like for the petrochemical complex that runs alongside the Mississippi River near Baton Rouge, Louisiana. For example, the ExxonMobil Baton Rouge Refinery:

[Photo: ExxonMobil Baton Rouge refinery]

That’s only one, and as you follow the Mississippi downriver, you will find Dow Chemical and a host of similar plants. I don’t know of any survey of wireless devices at these plants.

Islamic State (use the correct name)

July 4th, 2015

Hall: BBC will not call IS ‘Daesh’

From the post:

According to The Times, the director general has said that the broadcaster will not adopt the name ‘Daesh’ in place of IS as it was a ‘pejorative’ label used by enemies of the group, including Assad supporters in Syria.

Use of the name could be interpreted as support for those enemies, thereby damaging the BBC’s impartiality, the DG reasoned in his response to an open letter from 120 cross-party MPs.

In an age when media outlets self-censor at the hint of government displeasure, Tony Hall‘s stand is a refreshing one.

The group calls itself Islamic State, and who better to know its own name?

Reporters, bloggers and anyone who cares about non-partisan reporting (note, I did not say objective) should use the name, Islamic State in English language publications.

Feel free to use other terms, such as extremist, militant, terrorist, so long as you apply them equally to everyone. If you label one attack on a social gathering as a “terrorist attack,” then all attacks on social gatherings should be terrorist attacks. A car bomb is just as indiscriminate as a cruise missile or “smart” bomb.

Suggestion: Reporting on the Islamic State would be more balanced if the Islamic State assisted journalists in seeing the results of attacks against it. It is hard for a Western public to understand civilian casualties other than through the lens of Western media.

The Best of SRCCON

July 3rd, 2015

Here are the best links, resources, and roundups from SRCCON, the conference for journalism code by Laura Hazard Owen.

From the post:

Business casual, bad coffee, dry prewritten speeches, and an 8 a.m. start with no time between sessions — these things are familiar to anyone who’s ever been to a conference. SRCCON (pronounced “Source-Con”), organized by Knight-Mozilla OpenNews and held over two days last week in Minneapolis, aims to avoid the bad conference stereotypes and offer, instead, interactive discussions about “the challenges that news technology and data teams encounter every day.”

There was an NPR-sponsored coffee station run by Manual Coffeemaker No. 1’s Craighton Berman, handcrafting individual cups of pour-over. There was on-site, full-day childcare included in the price of admission. There was local beer, non-alcoholic beer, halal meals for Ramadan, and unisex bathrooms. (There is a flip side to all this inclusiveness: SRCCON’s celebration of nerdery can feel as intimidating at first as the celebration of any other group identity that you don’t totally identify with.) The start time was 10 a.m. the first day and 11 a.m. the second. And with over 50 sessions, the conference’s 220 attendees discussed topics from ad viewability and machine learning to journalist burnout and remote work.

If that sounds cool to you, you are really going to like the collection of links to notes, slides and other resources that Laura has collected for SRCCON 2015.

Not to mention leaving you looking forward to SRCCON 2016 while you develop your abilities with source code.

WorldWide Telescope to the Open Source .NET Universe

July 3rd, 2015

Welcoming the WorldWide Telescope to the Open Source .NET Universe by Martin Woodward.

From the post:

At the .NET Foundation we strive to put code into the hands of those who use it, in an effort to create an innovative and exciting community. Today we’re excited to announce that we are doing just that in welcoming the WorldWide Telescope to the exciting universe of open source .NET.

I did my undergraduate degree in physics at a time when the Hubble Space Telescope (HST) was a new thing. I remember very well my amazement when I could load up one of about 100 CD-ROM’s from the Digitized Sky Survey to get access to observations from the Palomar Observatory and then later the HST, and compare them with my own results to track changes in the night sky. CD-ROM’s were a new thing back then too, but I wrote some VB code to capture data out of the JPEG images in the Sky Survey and compare it with my own images from the CCD in the back of the telescope on the roof of the University of Durham Physics department.

Fast forward to 2008 and Microsoft Research moved Robert Scoble to tears and wowed the audience at TED when it released the WorldWide Telescope, giving the public access to exactly the same type of raw astronomical data through an easy-to-use interface. The WorldWide Telescope application is great because it puts an incredible visualization engine together with some of the most interesting scientific data in the world into the hands of anyone. You can just explore the pretty pictures and zoom in as if you are seeing the universe on some of the best telescopes in the world – but you can also do real science with the same interface. Astronomers and educators using WorldWide Telescope have come to appreciate the beauty and power of tooling that enables such rich data exploration – truly setting that data free.

Today, I am thrilled to announce that the .NET Foundation is working together with Microsoft Research and the WorldWide Telescope project team to set the application itself free. The code, written in .NET, is now available as an open source application under the MIT License on GitHub. We are very keen to help the team develop in the open and now that WorldWide Telescope is open source, any individual or organization will be able to adapt and extend the functionality of the application and services to meet their research or educational needs. Not only can they contribute those changes back to the wider community through a pull request, but they’ll allow others to build on their research and development. Extensions to the software will continuously enhance astronomical research, formal and informal learning, and public outreach, while also leveraging the power of the .NET ecosystem.

The WorldWide Telescope represents a new community coming to the Foundation. It’s also great that we now have representation within the foundation from a project that is a complex system building on top of the .NET Framework, with both a desktop client and extensive server-based infrastructure. The WorldWide Telescope is an important tool and I’m glad the .NET Foundation can be of help as it begins its journey as an open source application with committers from inside and outside of Microsoft. We’re thrilled to welcome the community of astronomers using and contributing to the WorldWide Telescope into the exciting universe of open source .NET.

You can read more about the WorldWide Telescope on the website and more about the move to open source on the Microsoft Research Connections blog. The WorldWide Telescope team also have a very cool video on YouTube showing the power of the WorldWide Telescope in action where you can also find a wealth of videos from the community.

Remind me to put a new version of Windows on a VM in my Ubuntu box. ;-)

Very cool!

Spot the Errors – VMware/Carahsoft Pony Up $75.5 Million (but did no wrong)

July 2nd, 2015

VMware, Carahsoft Pay $75.5 Million To Settle Government Overcharging Lawsuit by Kevin McLaughlin.

From the post:

VMware and reseller partner Carahsoft have agreed to pay $75.5 million to settle a civil lawsuit alleging overcharging of the federal government for VMware products and services over a six-year period, the U.S. Department of Justice said in a news release Tuesday.

Read the rest of Kevin’s post and see if you can spot the errors in the story. There are at least two (2).

Take your time….

The first error is at the bottom of the first page of the article:

“VMware believes that its commercial sales practice disclosures to the GSA were accurate and denies that it violated the False Claims Act,” the spokesman said in an email. “[VMware] nevertheless elected to settle this lawsuit rather than engage in protracted litigation with one of its important customers – the federal government.”

The error isn’t in Kevin’s reporting but in the VMware statement: “…rather than engage in protracted litigation with one of its important customers – the federal government.”

That statement should read: “rather than engage in protracted litigation with one of its former customers – the federal government.”

How much confidence would you have in a supplier who attempted to cheat you once and even now is involved in questionable dealings in another contract? Army ELA: Weapon Of Mass Confusion? also by Kevin McLaughlin.

Compounding that first error, the federal government didn’t forfeit any rights it may have to present or future versions of VMware’s software; it remains a customer. Cheating the sovereign should be severely discouraged.

The second error, again not Kevin’s fault, is the absence of the names, from the government, VMware and Carahsoft, of the people who were involved in the overcharging incident.

Without accountability of the individuals involved in this sorry affair and no doubt hundreds if not thousands of others, defrauding the government will remain commonplace. If the purpose of government is to act as a big cookie jar for contractors, I suppose that is ok.

My personal preference is for an honest and relatively effective government, such as with cybersecurity, project management, etc. Just a personal opinion.

Introducing LegalPad [free editor]

July 2nd, 2015

Introducing LegalPad by Jake Heller.

From the webpage:

I’m thrilled to officially announce something we’ve been working on behind the scenes here at Casetext: LegalPad. It’s live on the site right now: you can use it, for free, and without registering. So before reading about it from me, I recommend checking it out for yourself!

A rethought writing experience

LegalPad is designed to be the best way to write commentary about the law.

This means a few things. First, we created a clean writing experience, easier to use than traditional blogging platforms. Editing is done through a simplified editor bar that is there only when you need it so you can stay focused on your writing.

Second, the writing experience is especially tailored towards legal writing in particular. Legal writing is hard. Because law is based on precedent and authority, you need to juggle dozens of primary sources and documents. And as you write, you’re constantly formatting, cite-checking, BlueBooking, editing, emailing versions for comments, and researching. All of this overhead distracts from the one thing you really want to focus on: perfecting your argument.

LegalPad was designed to help you focus on what matters and avoid unnecessary distractions. A sidebar enables you to quickly pull up bookmarks collected while doing research on Casetext. You can add a reference to the cases, statutes, regulations, or other posts you bookmarked, which are added with the correct citation and a hyperlink to the original source.

You can also pull up the full text of the items you’ve bookmarked in what we are calling the PocketCase. Not only does the PocketCase enable you to read the full text of the case you are writing about while you’re writing, you can also drop in quotes directly into the text. They’ll be correctly formatted, have the right citation, and even include the pincite to the page you’ve copied from.

LegalPad also has one final, very special feature. If your post cites to legal authority, it will be connected to the case, statute, or regulation you referenced such that next time someone reads the authority, they’ll be alerted to your commentary. This makes the world’s best free legal research platform an even better resource. It also helps you reach an audience of over 350,000 attorneys, in-house counsel, professors, law students, other legal professionals, and business leaders who use Casetext as a resource every month.

LegalPad and Casetext are free so I signed up.

I am working on an annotation of Lamont v. Postmaster General 381 U.S. 301 (1965) to demonstrate its relevance to FBI Director James Comey’s plan to track contacts with ISIS over social media.

A great deal of thought and effort has gone into this editing interface! I was particularly pleased by the quote-insert feature, which links back to the original material.

At first blush and with about fifteen (15) minutes of experience with the interface, I suspect that enhancing it with entity recognition and stock associations would not be that much of a leap. Could be very interesting.

More after I have written more text with it.

Collaborative Annotation for Scientific Data Discovery and Reuse [+ A Stumbling Block]

July 2nd, 2015

Collaborative Annotation for Scientific Data Discovery and Reuse by Kirk Borne.

From the post:

The enormous growth in scientific data repositories requires more meaningful indexing, classification and descriptive metadata in order to facilitate data discovery, reuse and understanding. Meaningful classification labels and metadata can be derived autonomously through machine intelligence or manually through human computation. Human computation is the application of human intelligence to solving problems that are either too complex or impossible for computers. For enormous data collections, a combination of machine and human computation approaches is required. Specifically, the assignment of meaningful tags (annotations) to each unique data granule is best achieved through collaborative participation of data providers, curators and end users to augment and validate the results derived from machine learning (data mining) classification algorithms. We see very successful implementations of this joint machine-human collaborative approach in citizen science projects such as Galaxy Zoo and the Zooniverse (http://zooniverse.org/).

In the current era of scientific information explosion, the big data avalanche is creating enormous challenges for the long-term curation of scientific data. In particular, the classic librarian activities of classification and indexing become insurmountable. Automated machine-based approaches (such as data mining) can help, but these methods only work well when the classification and indexing algorithms have good training sets. What happens when the data includes anomalous patterns or features that are not represented in the training collection? In such cases, human-supported classification and labeling become essential – humans are very good at pattern discovery, detection and recognition. When the data volumes reach astronomical levels, it becomes particularly useful, productive and educational to crowdsource the labeling (annotation) effort. The new data objects (and their associated tags) then become new training examples, added to the data mining training sets, thereby improving the accuracy and completeness of the machine-based algorithms.
….

Kirk goes on to say:

…it is incumbent upon science disciplines and research communities to develop common data models, taxonomies and ontologies.

Sigh, but we know from experience that has never worked. True, we can develop more common data models, taxonomies and ontologies, but they will be in addition to the present common data models, taxonomies and ontologies. Not to mention that developing knowledge is going to lead to future common data models, taxonomies and ontologies.

If you don’t believe me, take a look at: Library of Congress Subject Headings Tentative Monthly List 07 (July 17, 2015). These subject headings have not yet been approved but they are in addition to existing subject headings.

The most recent approved list: Library of Congress Subject Headings Monthly List 05 (May 18, 2015). For approved lists going back to 1997, see: Library of Congress Subject Headings (LCSH) Approved Lists.

Unless you are working in some incredibly static and sterile field, the basic terms that are found in “common data models, taxonomies and ontologies” are going to change over time.

The only sure bet in the area of knowledge and its classification is that change is coming.

But, Kirk is right, common data models, taxonomies and ontologies are useful. So how do we make them more useful in the face of constant change?

Why not use topics to model the elements/terms of common data models, taxonomies and ontologies? That would enable users to search across such elements/terms by the properties of those topics, possibly discovering topics that represent the same subject under a different term or element.

Imagine working on an update of a common data model, taxonomy or ontology and not having to guess at the meaning of bare elements or terms, because a wealth of information, including previous elements/terms for the same subject, is present at each topic.
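
As a toy sketch of what that might look like (hypothetical identifiers and terms, not any particular topic map API or vocabulary):

# A "topic" collects the names a subject has carried across successive
# taxonomies, so a search on an obsolete term still finds the subject.
topics = [
    {
        "subject": "http://example.org/subject/soil-moisture",   # hypothetical identifier
        "names": {
            "taxonomy-2009": "Soil Wetness",
            "taxonomy-2013": "Soil Moisture",
            "ontology-2015": "soilMoistureContent",
        },
        "definition": "Water content of the upper soil layer.",
    },
]

def find_by_any_name(term):
    term = term.lower()
    return [t for t in topics
            if any(term == name.lower() for name in t["names"].values())]

for hit in find_by_any_name("Soil Wetness"):   # an obsolete term still resolves
    print(hit["subject"], "->", hit["names"]["ontology-2015"])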

All of the benefits that Kirk claims would accrue, plus empowering users who only know previous common data models, taxonomies and ontologies, to say nothing of easing the transition to future common data models, taxonomies and ontologies.

Knowledge isn’t static. Our methodologies for knowledge classification should be as dynamic as the knowledge we seek to classify.

The Big Lie About the Islamic State of Iraq and Syria (ISIS) and Social Media

July 2nd, 2015

Jim Comey, ISIS, and “Going Dark” by Benjamin Wittes.

From the post:

FBI Director James Comey said Thursday his agency does not yet have the capabilities to limit ISIS attempts to recruit Americans through social media.

It is becoming increasingly apparent that Americans are gravitating toward the militant organization by engaging with ISIS online, Comey said, but he told reporters that “we don’t have the capability we need” to keep the “troubled minds” at home.

“Our job is to find needles in a nationwide haystack, needles that are increasingly invisible to us because of end-to-end encryption,” Comey said. “This is the ‘going dark’ problem in high definition.”

Comey said ISIS is increasingly communicating with Americans via mobile apps that are difficult for the FBI to decrypt. He also explained that he had to balance the desire to intercept the communication with broader privacy concerns.

“It is a really, really hard problem, but the collision that’s going on between important privacy concerns and public safety is significant enough that we have to figure out a way to solve it,” Comey said.

Let’s unpack this.

As has been widely reported, the FBI has been busy recently dealing with ISIS threats. There have been a bunch of arrests, both because ISIS has gotten extremely good at inducing self-radicalization in disaffected souls worldwide using Twitter and because of the convergence of Ramadan and the run-up to the July 4 holiday.

Just as an empirical matter, phrases like, “…ISIS has gotten extremely good at…”, should be discarded as noise. You have heard of the three teenage girls from the UK who “attempted” to join the Islamic State of Iraq and Syria. Taking the “teenage” population of the UK to fall between 10 and 19 years of age, the UK teen population for 2014 was 7,667,000.

Three (3) teens from a population of 7,667,000 doesn’t sound like “…extremely good…” recruitment to me.

Moreover, unless they have amended the US Constitution quite recently, as a US citizen I am free to read publications by any organization on the face of the Earth. There are some minor exceptions for child pornography, but political speech, which Islamic State of Iraq and Syria publications clearly fall under, is given the highest level of protection by the Constitution.

Unlike the Big Lie statements about Islamic State of Iraq and Syria and social media, there is empirical research on the impact of surveillance on First Amendment rights:

This article brings First Amendment theory into conversation with social science research. The studies surveyed here show that surveillance has certain effects that directly implicate the theories behind the First Amendment, beyond merely causing people to stop speaking when they know they are being watched. Specifically, this article finds that social science research supports the protection of reader and viewer privacy under many of the theories used to justify First Amendment protection.

If the First Amendment serves to foster a marketplace of ideas, surveillance thwarts this purpose by preventing the development of minority ideas. Research indicates that surveillance more strongly affects those who do not yet hold strong views than those who do.

If the First Amendment serves to encourage democratic self-governance, surveillance thwarts this purpose as well. Surveillance discourages individuals with unformed ideas from deviating from majority political views. And if the First Amendment is intended to allow the fullest development of the autonomous self, surveillance interferes with autonomy. Surveillance encourages individuals to follow what they think others expect of them and conform to perceived norms instead of engaging in unhampered self-development.

The quote is from the introduction to: The Conforming Effect: First Amendment Implications of Surveillance, Beyond Chilling Speech by Margot E. Kaminski and Shane Witnov. (Kaminski, Margot E. and Witnov, Shane, The Conforming Effect: First Amendment Implications of Surveillance, Beyond Chilling Speech (January 1, 2015). University of Richmond Law Review, Vol. 49, 2015; Ohio State Public Law Working Paper No. 288. Available at SSRN: http://ssrn.com/abstract=2550385)

The abstract from Kaminski and Witnov reads:

First Amendment jurisprudence is wary not only of direct bans on speech, but of the chilling effect. A growing number of scholars have suggested that chilling arises from more than just a threat of overbroad enforcement — surveillance has a chilling effect on both speech and intellectual inquiries. Surveillance of intellectual habits, these scholars suggest, implicates First Amendment values. However, courts and legislatures have been divided in their understanding of the extent to which surveillance chills speech and thus causes First Amendment harms.

This article brings First Amendment theory into conversation with social psychology to show that not only is there empirical support for the idea that surveillance chills speech, but surveillance has additional consequences that implicate multiple theories of the First Amendment. We call these consequences “the conforming effect.” Surveillance causes individuals to conform their behavior to perceived group norms, even when they are unaware that they are conforming. Under multiple theories of the First Amendment — the marketplace of ideas, democratic self-governance, autonomy theory, and cultural democracy — these studies suggest that surveillance’s effects on speech are broad. Courts and legislatures should keep these effects in mind.

Conformity to the standard US line on Islamic State of Iraq and Syria is the more likely goal of FBI Director James Comey than stopping “successful” Islamic State of Iraq and Syria recruitment over social media.

The article also looks at First Amendment cases, including one that is directly on point for ISIS social media:

The Supreme Court has stated that laws that deter the expression of minority viewpoints by airing the identities of their holders are also unconstitutional. In Lamont, the Court found unconstitutional a requirement that mail recipients write in to request communist literature. The state had an impermissible role in identifying this minority viewpoint and condemning it. The Court reasoned that ― any addressee is likely to feel some inhibition in sending for literature which federal officials have condemned as ‘communist political propaganda.‘ The individual‘s inhibition stems from the state‘s obvious condemnation, but also from a fear of social repercussions (by the state as an employer). The Court found that the requirement that a person identify herself as a communist ―is almost certain to have a deterrent effect. (I omitted the footnote numbers for ease of reading.)

For more on Lamont (decided 8-0), see Lamont v. Postmaster General (Wikipedia) and the text of the decision, Lamont v. Postmaster General, 381 U.S. 301 (1965), at Justia.

The date of the decision is important as well, 1965. In 1965, China and Russia had a total combined population of 842 million people, the vast majority of whom one assumes were communists.

Despite the presence of a potential 842 million communists in the world, the Lamont Court found that even chilling access to communist literature was not permitted under the United States Constitution.

Before someone challenges my claim that social media has not been successful for the Islamic State of Iraq and Syria, remember that at best the Islamic State of Iraq and Syria has recruited 30,000 fighters from outside of Syria, with no hard evidence on whether they were motivated by social media or not.

Even assuming all 30,000 were recruited through social media, how does that compare to the 842 million presumed communists of 1965?

Why is Comey so frightened of a few thousand people? Frightened enough to abridge the free speech rights of every American, and who knows what he wants to do to non-Americans.

As best I understand the goals of the Islamic State of Iraq and Syria, the overriding one is to have a Muslim government that is not subservient to Western powers. I don’t find that remotely threatening. Whether the Islamic State of Iraq and Syria will be the one to establish such a government is unclear. Governing is far more tedious and difficult than taking territory.

A truly Muslim government would be a far cry from the favoritism and intrigue that has characterized Western relationships with Muslim governments for centuries.

Citizens of the United States are in more danger from the FBI than they ever will be from members of the Islamic State of Iraq and Syria. Keep that in mind when you hear FBI Director James Comey talk about surveillance. The target of that surveillance is you.

HTTP Strict Transport Security

July 1st, 2015

Wikipedia summarizes HTTP Strict Transport Security as follows:

HTTP Strict Transport Security (HSTS) is a web security policy mechanism which is necessary to protect secure HTTPS websites against downgrade attacks, and which greatly simplifies protection against cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should only interact with it using secure HTTPS connections, and never via the insecure HTTP protocol. HSTS is an IETF standards track protocol and is specified in RFC 6797.

The HSTS Policy is communicated by the server to the user agent via an HTTP response header field named “Strict-Transport-Security”. HSTS Policy specifies a period of time during which the user agent shall access the server in a secure-only fashion.

I mention that because Troy Hunt has posted: Understanding HTTP Strict Transport Security (HSTS) and preloading it into the browser.

It is a very deep and wonderful walk through the HTTP Strict Transport Security (HSTS) protocol.
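As a quick taste of what the policy amounts to in practice, here is a minimal sketch of a server emitting the header. The Flask app and the one-year max-age are my own illustrative choices, not something taken from Troy’s post:

# Minimal sketch: adding an HSTS header to every response from a Flask app.
# The max-age (one year) and includeSubDomains flag are illustrative choices.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Tell complying user agents to use HTTPS only for the next 31536000 seconds.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "Hello over HTTPS!"

if __name__ == "__main__":
    # "adhoc" serves a throwaway self-signed certificate for local testing
    # (requires pyOpenSSL); browsers only honor HSTS on HTTPS responses.
    app.run(ssl_context="adhoc")

Note that user agents ignore the header when it arrives over plain HTTP, which is exactly the bootstrapping problem that preloading into the browser is meant to solve.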

Something for you night owls who are looking for something “technical” for the evening.

GCHQ has legal immunity to reverse-engineer…

July 1st, 2015

GCHQ has legal immunity to reverse-engineer Kaspersky antivirus, crypto by Glyn Moody.

From the post:

Newly-published documents from the Snowden trove show GCHQ asking for and obtaining special permission to infringe on the copyright of software programs that it wished to reverse-engineer for the purpose of compromising them. GCHQ wanted a warrant that would give it indemnity against legal action from the companies owning the software in the unlikely event that they ever found out.

The legal justification for this permission is dubious. As the new report in The Intercept explains: “GCHQ obtained its warrant under section 5 of the 1994 Intelligence Services Act [ISA], which covers interference with property and ‘wireless telegraphy’ by the Security Service (MI5), Secret Intelligence Service (MI6) and GCHQ.” Significantly, Section 5 of the ISA does not mention interference in abstractions like copyright, but in 2005 the intelligence services commissioner approved the activity anyway.

It is difficult to say if the de-legitimization of laws and government by intelligence agencies is a deliberate strategy or not.

Whether intended or not, it has become clear that the privacy rights of citizens, the property rights of commercial entities, and even the marketability of commercial software and services have no meaning for the United States government.

Technology companies, enterprises of all types, citizens, etc., need to unite to return government to its legitimate goals, one of which is respecting the rights of citizens, the property rights of enterprises, and the reputations of technology companies in the worldwide market.

Of what use is a global market if US vendors are so distrusted, due to government interference with their products, that their market share dwindles?

GCHQ has availed itself of legal fictions much as the United States did with the so-called torture memos. All involved should be aware that no regime reigns forever.

Rush to implement Internet of Things ‘could undermine security’

July 1st, 2015

Rush to implement Internet of Things ‘could undermine security’ by Jane McCallion.

From the post:

Internet of Things (IoT) companies risk undermining the security of their own sector in the race to deploy new solutions, it has been claimed.

Speaking at a panel discussion hosted by Rackspace, Yodit Stanton, CEO and founder of OpenSensors.io, said that compared to standard devices that transmit and receive data, building security into IoT devices is “different, but not hard”.

“You have these tiny processors with not much memory so you can’t use keys, for example, but there are very good security chips,” said Stanton.

However, she added: “People just don’t use them enough, which I despair about. I think there is an element of ‘oh we’ll just deploy this thing and it’ll be fine’, because they don’t really think about the implications. The technology is there, but in the enthusiasm and in the rush of this new thing we are possibly neglecting [it].”

Do you think?

Just earlier today we were talking about how developers can’t use encryption libraries correctly. Remember the image of someone being asked to fly an airplane while they only had a driver’s license?

The people speculating in Jane’s post about security not being hard for IoT devices and the “potential” that security may be neglected only have driver’s licenses when it comes to cybersecurity.

As for potential EU data protection regulations, you do realize that such regulations are developed to define non-liability for the regulated conduct? If you conform to the regulation, you get a free pass no matter how severe the damages.

The better course would be to invalidate EULAs on a products liability theory and hold manufacturers and others liable in court. I need to go back and read Posner and others, plus the contemporary literature on the legality of EULAs.

What do you think the result will be of adding the insecure IoT on top of the present insecure worldwide IT infrastructure? More insecurity?

Right in one!

Digital Data Repositories in Chemistry…

July 1st, 2015

Digital Data Repositories in Chemistry and Their Integration with Journals and Electronic Notebooks by Matthew J. Harvey, Nicholas J. Mason, Henry S. Rzepa.

Abstract:

We discuss the concept of recasting the data-rich scientific journal article into two components, a narrative and separate data components, each of which is assigned a persistent digital object identifier. Doing so allows each of these components to exist in an environment optimized for purpose. We make use of a poorly-known feature of the handle system for assigning persistent identifiers that allows an individual data file from a larger file set to be retrieved according to its file name or its MIME type. The data objects allow facile visualization and retrieval for reuse of the data and facilitates other operations such as data mining. Examples from five recently published articles illustrate these concepts.
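To make the retrieval idea concrete, here is a rough sketch of fetching a data object from a persistent identifier using HTTP content negotiation. The DOI, the Accept header, and the output filename are placeholders of my own, not taken from the paper, which relies on the handle system’s own file-name and MIME-type addressing:

import requests

# Hypothetical identifier for a deposited data set; not a real DOI.
doi = "10.xxxx/example-dataset"

# Ask the repository for a specific media type, if it offers one.
response = requests.get(
    "https://doi.org/" + doi,
    headers={"Accept": "chemical/x-mdl-molfile"},
    allow_redirects=True,
    timeout=30,
)
response.raise_for_status()

with open("dataset.mol", "wb") as fh:
    fh.write(response.content)

print("Retrieved", len(response.content), "bytes from", response.url)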

A very promising effort to integrate published content and electronic notebooks in chemistry. It is encouraging that, in addition to the technical and identity issues, the authors also point out the lack of incentives for the extra work required to achieve useful integration.

Everyone agrees that deeper integration of resources in the sciences will be a game-changer, but renewing the realization that there is no such thing as a free lunch is an important step towards that goal.

This article easily repays a close read with interesting subject identity issues and the potential that topic maps would offer to such an effort.

Crime, Prisons and Punishment

July 1st, 2015

Crime, Prisons and Punishment

From the webpage:

Just how murky is your past? Are there law breakers or law makers in your family tree? Whether your family history contains vice or virtue, with our Crime and Punishment month we’ll be giving you the opportunity to find out, with blogs, articles and videos to help you research your criminal ancestry.

Launched to coincide with our release of almost 2 million crime and punishment records – made available online for the first time only on Findmypast – our Crime and Punishment month explores the seedy underbelly of our family histories.

In addition to our helpful blogs and videos, we’ll have stories of the criminals amongst our record collections, fun games and quizzes and case studies of the amazing criminal ancestry discoveries made by our users. Find out more over on our blog!

I don’t usually post about strictly commercial sites but this one has “family reunion” written all over it. Appears to be focused on the UK, Australia, etc.

If you have any ancestors in the records covered, it could be a real conversation starter at your next family event. ;-)

One Million Contributors to the Huffington Post

July 1st, 2015

Arianna Huffington’s next million mark by Ken Doctor.

From the post:

Before the end of this year, HuffPost will release new tech and a new app, opening the floodgates for contributors. The goal: Add 900,000 contributors to Huffington Post’s 100,000 current ones. Yes, one million in total.

How fast would Arianna like to get to that number?

“One day,” she joked, as we discussed her latest project, code-named Donatello for the Renaissance sculptor. Lots of people got to be Huffington Post contributors through Arianna Huffington. They’d meet her at a book signing, send an email and find themselves hooked up. “It’s one of my favorite things,” she told me Thursday. Now, though, that kind of retail recruitment may be a vestige.

“It’s been an essential part of our DNA,” she said, talking about the user contributions that once seemed to outnumber the A.P. stories and smaller original news staff’s work. “We’ve always been a hybrid platform,” a mix of pros and contributors.

So what enables the new strategy? Technology, naturally.

HuffPost’s new content management system is now being extended to work as a self-publishing platform as well. It will allow contributors to post directly from their smartphones, and add in video. Behind the scenes, a streamlined approval system is intended to reduce human (editor) intervention. Get approved once, then publish away, “while preserving the quality,” Huffington added.

Adding another 900,000 contributors to the Huffington Post is going to bump their content production substantially.

So, here’s the question: Searching the Huffington Post site is as bad as searching most other media sites. What is adding content from another 900,000 contributors going to do to that experience? Make it worse? That’s my first bet.

On the other hand, what if authors could unknowingly create topic maps? For example, auto-tagging could offer Wikipedia links (one or more) for each entity in a story; for relationships, a drop-down menu with roles for the major relationship types (slept-with being available for inside the Beltway); plus auto-generated relationships to the author, events mentioned, and other content at the Huffington Post.

Don’t solve the indexing/search problem after the fact; create smarter data up front. Promote the content with better tagging and relationships. With 1 million unpaid contributors trying to get their contributions noticed, that’s a win-win situation.
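Here is a rough sketch of the kind of “smarter data up front” I have in mind: tag entities in a draft post and attach candidate Wikipedia links before publication. The spaCy model name and the naive linking heuristic are assumptions for illustration, not a description of any actual HuffPost system:

import spacy

# Small English model; assumed installed (python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

def tag_entities(text):
    """Return (entity, label, candidate_wikipedia_url) triples for a draft post."""
    doc = nlp(text)
    tags = []
    for ent in doc.ents:
        # Naive candidate link: slugify the surface form. A real system would
        # disambiguate against Wikipedia/Wikidata rather than guess.
        url = "https://en.wikipedia.org/wiki/" + ent.text.replace(" ", "_")
        tags.append((ent.text, ent.label_, url))
    return tags

draft = "Arianna Huffington spoke in New York about the Huffington Post."
for entity, label, url in tag_entities(draft):
    print(entity, "[" + label + "]", "->", url)

Storing those triples alongside the article is what turns after-the-fact search into up-front structure; contributors or editors could correct the links before publishing.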

Independence Day: Should We Celebrate Our Government?

July 1st, 2015

While you are celebrating July 4th, Independence Day in the United States, there will be lots of flag waving and laudatory things being said about our government.

You have seen stories on this blog about government misconduct, and even more in the mainstream news. With all of the emphasis on the honors that veterans should receive on the 4th of July, let’s take time to remember that our government doesn’t honor veterans.

Quite the contrary, it conceals inhuman experiments upon veterans, lies about its efforts to locate them, and ultimately fails to right the wrongs it has done.

Caitlin Dickerson in The VA’s Broken Promise To Thousands Of Vets Exposed To Mustard Gas writes of one such case:

In secret chemical weapons experiments conducted during World War II, the U.S. military exposed thousands of American troops to mustard gas.

When those experiments were formally declassified in the 1990s, the Department of Veterans Affairs made two promises: to locate about 4,000 men who were used in the most extreme tests, and to compensate those who had permanent injuries.

But the VA didn’t uphold those promises, an NPR investigation has found.

NPR interviewed more than 40 living test subjects and family members, and they describe an unending cycle of appeals and denials as they struggled to get government benefits for mustard gas exposure. Some gave up out of frustration.

In more than 20 years, the VA attempted to reach just 610 of the men, with a single letter sent in the mail. Brad Flohr, a VA senior adviser for benefits, says the agency couldn’t find the rest, because military records of the experiments were incomplete.

“There was no identifying information,” he says. “No Social Security numbers, no addresses, no … way of identifying them. Although, we tried.”

Yet in just two months, an NPR research librarian located more than 1,200 of them, using the VA’s own list of test subjects and public records.

The mustard gas experiments were conducted at a time when American intelligence showed that enemy gas attacks were imminent. The tests evaluated protective equipment like gas masks and suits. They also compared the relative sensitivity of soldiers, including tests designed to look for racial difference.

The test subjects who are still alive are now in their 80s and 90s. Each year more of their stories die with them.

Our government, the one you are celebrating on July 4th, conducted secret and inhumane experiments on its own troops, concealed those experiments for approximately fifty (50) years and, when they were discovered, promised to find these veterans and to compensate them. Those promises were bald-faced lies.

I can’t think of a good reason to celebrate our government. Can you?

DEFT Zero RC1 ready for download

July 1st, 2015

DEFT Zero RC1 ready for download

From the post:

During the fourth edition of DEFTCON 2015 in Rome last April 17 (more than 200 attendees, a high level of teaching), in collaboration with the ISACA chapter of Rome and the Tech & Law Center, DEFT Zero is finally ready and has been released in RC mode (release candidate).

This mini distro, dedicated to the acquisition of media, implements the new write-blocking system amply explained in the new DEFT Zero user manuals released on this site.

Download DEFT Zero user guide
Download DEFT Zero RC1

From the foreword of the quick guide:

DEFT Zero is designed to be a DEFT light version focused on the forensic copying of digital evidence (i.e. hard disks, USB devices and network shares).

DEFT Zero requires considerably less space in RAM and on a CDROM/pendrive. It needs about 400 Mbytes and can even boot in RAM-preloaded mode on obsolete, low-resource hardware.

DEFT Zero is based on Lubuntu 14.04.02 LTS and its future releases will be developed in parallel with DEFT full version.

DEFT Zero can run on the newest hardware as well, since it supports 32- and 64-bit platforms with UEFI and Secure Boot, such as MacBooks and Windows 8 ready machines.

This document covers the differences and enhancements with respect to the DEFT standard (full) version.
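For readers who have never made a forensic copy, here is a rough sketch of what the task amounts to: a bit-for-bit image of a device plus a hash to verify the copy later. DEFT Zero ships purpose-built tools for this; the device path, image name and block size below are illustrative assumptions only:

import hashlib

SOURCE = "/dev/sdb"            # hypothetical evidence device, attached write-blocked
IMAGE = "evidence.dd"          # raw image file to create
BLOCK_SIZE = 4 * 1024 * 1024   # read 4 MiB at a time

sha256 = hashlib.sha256()
with open(SOURCE, "rb") as src, open(IMAGE, "wb") as dst:
    while True:
        block = src.read(BLOCK_SIZE)
        if not block:
            break
        dst.write(block)
        sha256.update(block)

print("Image written to", IMAGE)
print("SHA-256:", sha256.hexdigest())  # record this in your acquisition notes

The write blocking that DEFT Zero emphasizes matters precisely because a copy like this is worthless as evidence if the source device was modified during acquisition.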

The latest manual I saw on the site was for DEFT (Digital Evidence & Forensic Toolkit) 7 Manual, dated 2012.

The DEFT 7 manual says:

On these pages you will not find exhaustive explanations on the use of all applications and commands currently in the DEFT distribution.

Judging from the download directory for DEFT full, the most recent version of DEFT full is 8.2.

A great authoring/editing opportunity for anyone interested in cybersecurity.

Having to write down a coherent explanation is almost as much of a learning experience as teaching the material!

PS: Travel/business tip: Always carry marked burner USB drives with security tape to signal their use. (And don’t reuse.)

Homographic Phishing

July 1st, 2015

Lloydsbank, IIoydsbank – researcher highlights the homographic phishing problem by Graham Cluley.

Homographs are words that share the same form but have a different meaning.

Think of bow (the ribbon you tie on a gift) and bow (the recurve you shoot arrows with).

Graham’s post is about words that “look alike” due to default font sets, like an uppercase “I” and lowercase “l.”

In his post you will find the familiar lloydsbank.co.uk (legitimate) being confused with IIoydsbank.co.uk (not a legitimate site). The second site starts with two capital “I”s rather than the lowercase “ll”. ;-)

Graham has also written about Cyrillic letters that are very similar to Latin ones in Wɑit! Stοp! Is that ℓιηκ what it claims to be?

I don’t know of a survey of all the “similar” letters in Unicode but they aren’t limited to Cyrillic.

If such a list were available, browsers could warn users that the default font was displaying non-Latin characters (which your brain otherwise auto-corrects).
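As a minimal sketch of what such a warning could look for, here is a check that flags hostnames mixing Latin letters with letters from other scripts. The script heuristic below is my own illustration; it is not a complete confusables list (Unicode publishes one in UTS #39), and it would not catch same-script tricks like the capital-I-for-lowercase-l swap above:

import unicodedata

def scripts_used(hostname):
    """Return the set of script prefixes (LATIN, CYRILLIC, ...) for letters in hostname."""
    scripts = set()
    for ch in hostname:
        if ch.isalpha():
            # Unicode character names begin with the script, e.g.
            # "CYRILLIC SMALL LETTER O" vs. "LATIN SMALL LETTER O".
            name = unicodedata.name(ch, "UNKNOWN")
            scripts.add(name.split(" ")[0])
    return scripts

def looks_suspicious(hostname):
    # More than one script in a single hostname is a red flag.
    return len(scripts_used(hostname)) > 1

print(looks_suspicious("lloydsbank.co.uk"))   # False: all Latin
print(looks_suspicious("llоydsbank.co.uk"))   # True: the "o" here is Cyrillic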

Graham concludes with good advice:

Maybe the best advice of all is to never click on links to financial websites if you receive them in an email or see them on a website.