Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

March 8, 2015

Lies, Damned Lies, and Clapper (2015)

Filed under: Government,Intelligence,Politics — Patrick Durusau @ 9:10 am

Worldwide Threat Assessment of the US Intelligence Community 2015 by James R Clapper (Director of National Intelligence).

The amazing thing about Director of National Intelligence (DNI) Clapper is that he remains out of prison and uncharged for his prior lies to Congress.

Clapper should get points for an amazing lack of self-awareness when he addresses the issue of unknown integrity of information due to cyber attacks:

Decision making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.

Decision making by members of Congress (senior government officials) and by members of the public is impaired when they cannot obtain trustworthy information from government agencies and their leaders.

In that regard, the 2015 threat assessment is incomplete. It should have included the threats, cyber and otherwise, that the US public faces from its own government.

February 21, 2015

Basic Understanding of Big Data…. [The need for better filtering tools]

Filed under: BigData,Intelligence — Patrick Durusau @ 11:12 am

Basic Understanding of Big Data. What is this and How it is going to solve complex problems by Deepak Kumar.

From the post:

Before going into details about what is big data let’s take a moment to look at the below slides by Hewlett-Packard.

[Slide: "What is Big Data" (Hewlett-Packard)]

The post goes on to describe big data but never quite gets around to saying how it will solve complex problems.

I mention it for the HP graphic that illustrates the problem of big data for the intelligence community.

Yes, they have big data in the sense of the three V's (volume, variety, velocity) and so need processing infrastructure to manage it as input.

However, the results they seek are not the product of summing clicks, likes, retweets, ratings and/or web browsing behavior, at least not for the most part.

The vast majority of the “big data” at their disposal is noise that is masking a few signals that they wish to detect.

I mention that because of the seeming emphasis of late on real time or interactive processing of large quantities of data, which isn’t a bad thing, but also not a useful thing when what you really want are the emails, phone contacts and other digital debris of, say, fewer than one thousand (1,000) people (that number was chosen at random as an illustration; I have no idea of the actual number of people being monitored).

It may help to think of big data in the intelligence community as consisting of a vast amount of “big data” it doesn’t care about and a relatively tiny bit of data that it cares about a lot. The problem is separating the data into those two categories.

Take the telephone metadata records as an example. There is some known set of phone numbers that are monitored, along with contacts to and from those numbers. The rest of the numbers and their data are of interest if and only if, at some future date, they are added to the known set of phone numbers to be monitored. When the monitored numbers and their metadata are filtered out, I assume that previously investigated numbers for pizza delivery, dry cleaning and the like are filtered from the current data, leaving only current high-value contacts or new unknowns for investigation.

An emphasis on filtering before querying big data would reduce the number of spurious connections, simply because a smaller data set has less random data that could be mistaken for patterns. Not to mention that the smaller the data set, the more prior data could be associated with current data without overwhelming the analyst.

You may start off with big data but the goal is a very small amount of actionable data.
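
To make the filtering point concrete, here is a minimal sketch of reducing bulk call metadata to the slice that touches a known watchlist. The numbers, field names and the "already cleared" list are all invented for illustration.

```python
# A sketch of "filter before you query": reduce bulk call metadata to the
# tiny slice tied to a known watchlist. All numbers and fields are invented.

WATCHLIST = {"+15550100", "+15550199"}   # known numbers of interest
ALREADY_CLEARED = {"+15550123"}          # e.g., the pizza delivery line

def relevant(record):
    """Keep a call record only if it touches the watchlist and does not
    involve a previously cleared, low-value contact."""
    parties = {record["caller"], record["callee"]}
    return bool(parties & WATCHLIST) and not (parties & ALREADY_CLEARED)

bulk_metadata = [
    {"caller": "+15550100", "callee": "+15550123", "ts": "2015-02-20T09:14"},
    {"caller": "+15557777", "callee": "+15558888", "ts": "2015-02-20T09:15"},
    {"caller": "+15550199", "callee": "+15559999", "ts": "2015-02-20T09:16"},
]

actionable = [r for r in bulk_metadata if relevant(r)]
print(actionable)   # only the last record survives
```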

February 16, 2015

Intelligence Sharing, Crowd Sourcing and Good News for the NSA

Filed under: Crowd Sourcing,Intelligence,NSA — Patrick Durusau @ 3:11 pm

Lisa Vaas posted an entertaining piece today with the title: Are Miami cops really flooding Waze with fake police sightings?. Apparently an NBC affiliate (not FOX, amazing) tried its hand at FUD, alleging that Miami police officers were gaming Waze.

There is a problem with that theory, which Lisa points out, quoting Julie Mossler, a spokesperson for Waze:

Waze algorithms rely on crowdsourcing to confirm or negate what has been reported on the road. Thousands of users in Florida do this, both passively and actively, every day. In addition, we place greater trust in reports from heavy users and terminate accounts of those whose behavior demonstrate a pattern of contributing false information. As a result the Waze map will remain reliable and updated to the minute, reflecting real-time conditions.

Oops!

See Lisa’s post for the blow-by-blow account of this FUD attempt by the NBC affiliate.

However foolish an attempt to game Waze may be, it is a good example for promoting the sharing of intelligence.

Think about it. Rather than the consensus poop that emerges as the collaboration of the senior management in intelligence agencies, why not share all intelligence between agencies, among working analysts addressing the same areas or issues? Make the “crowd” people who have similar security clearances and common subject areas. And while contributions are trackable within an agency, to the “crowd” everyone has a handle and their contributions to shared intelligence are voted up or down. Just like with Waze, people will develop reputations within the system.

I assume that for turf reasons you could put handles on the intelligence as well, so participants would not know its origins, at least until people started building up trust in the system.
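
As a toy illustration of the Waze-style mechanics, here is a sketch of reputation-weighted voting on items shared under handles. The handles, weights and thresholds are made up; a real system would need far more than this.

```python
# Toy sketch: analysts post items under handles, peers vote, votes are
# weighted by reputation, and reputation moves with how a handle's
# contributions are scored. All names and numbers are invented.

from collections import defaultdict

reputation = defaultdict(lambda: 1.0)   # every handle starts with weight 1.0

def score(votes):
    """votes: list of (handle, +1 or -1); returns reputation-weighted score."""
    return sum(direction * reputation[handle] for handle, direction in votes)

def settle(item_author, votes, threshold=0.0):
    """Adjust the author's reputation once an item's votes are in."""
    s = score(votes)
    reputation[item_author] *= 1.1 if s > threshold else 0.9
    return s

# An item shared by analyst "A7" is voted on by three peers.
print(settle("A7", [("B2", +1), ("C9", +1), ("D4", -1)]))   # 1.0
print(reputation["A7"])                                      # 1.1
```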

Changing the cultures at the intelligence agencies, something that hasn’t succeeded since 9/11, would require a more dramatic approach than has been tried to date. My suggestion is to give the Inspectors General the ability to block promotions and/or fire people in the intelligence agencies who don’t actively promote the sharing of intelligence. Here “actively promote” would be measured by intelligence actually shared, not by plans and activities aimed at someday sharing it.

Unless and until there are consequences for the failure of members of the intelligence community to put the interests of their employers (in this case, citizens of the United States) above their own or those of their agency, the failure to share intelligence since 9/11 will continue.

PS: People will object that the staff in question have been productive, loyal, etc., etc., in the past. The relevant question is whether they have the skills and commitment that are required now. The answer to that question is either yes or no. Employment is an opportunity to perform, not an entitlement.

February 14, 2015

Mercury [March 5, 2015, Washington, DC]

Filed under: Government,Government Data,Intelligence — Patrick Durusau @ 7:47 pm

Mercury Registration Deadline: February 17, 2015.

From the post:

The Intelligence Advanced Research Projects Activity (IARPA) will host a Proposers’ Day Conference for the Mercury Program on March 5, in anticipation of the release of a new solicitation in support of the program. The Conference will be held from 8:30 AM to 5:00 PM EST in the Washington, DC metropolitan area. The purpose of the conference will be to provide introductory information on Mercury and the research problems that the program aims to address, to respond to questions from potential proposers, and to provide a forum for potential proposers to present their capabilities and identify potential team partners.

Program Description and Goals

Past research has found that publicly available data can be used to accurately forecast events such as political crises and disease outbreaks. However, in many cases, relevant data are not available, have significant lag times, or lack accuracy. Little research has examined whether data from foreign Signals Intelligence (SIGINT) can be used to improve forecasting accuracy in these cases.

The Mercury Program seeks to develop methods for continuous, automated analysis of SIGINT in order to anticipate and/or detect political crises, disease outbreaks, terrorist activity, and military actions. Anticipated innovations include: development of empirically driven sociological models for population-level behavior change in anticipation of, and response to, these events; processing and analysis of streaming data that represent those population behavior changes; development of data extraction techniques that focus on volume, rather than depth, by identifying shallow features of streaming SIGINT data that correlate with events; and development of models to generate probabilistic forecasts of future events. Successful proposers will combine cutting-edge research with the ability to develop robust forecasting capabilities from SIGINT data.

Mercury will not fund research on U.S. events, or on the identification or movement of specific individuals, and will only leverage existing foreign SIGINT data for research purposes.

The Mercury Program will consist of both unclassified and classified research activities and expects to draw upon the strengths of academia and industry through collaborative teaming. It is anticipated that teams will be multidisciplinary, and might include social scientists, mathematicians, statisticians, computer scientists, content extraction experts, information theorists, and SIGINT subject matter experts with applied experience in the U.S. SIGINT System.

Attendees must register no later than 6:00 pm EST, February 27, 2015 at http://events.SignUp4.com/MercuryPDRegistration_March2015. Directions to the conference facility and other materials will be provided upon registration. No walk-in registrations will be allowed.

I might be interested if you can hide me under a third or fourth level sub-contractor. 😉

Seriously, it isn’t that I despair of the legitimate missions of intelligence agencies, but I do despise waste on approaches known not to work. Government funding, even unlimited funding, isn’t going to magically confer the correct semantics on data or enable analysts to meaningfully share their work products across domains.

You would think that going on fourteen (14) years post-9/11 and not being one step closer to preventing a similar event would be a “wake-up” call to someone. If not in the U.S. intelligence community, perhaps in intelligence communities that tire of aping the U.S. community with no better results.

January 17, 2015

Bulk Collection of Signals Intelligence: Technical Options (2015)

Filed under: Intelligence,NSA — Patrick Durusau @ 8:07 pm

Bulk Collection of Signals Intelligence: Technical Options (2015)

Description:

The Bulk Collection of Signals Intelligence: Technical Options study is a result of an activity called for in Presidential Policy Directive 28, issued by President Obama in January 2014, to evaluate U.S. signals intelligence practices. The directive instructed the Office of the Director of National Intelligence (ODNI) to produce a report within one year “assessing the feasibility of creating software that would allow the intelligence community more easily to conduct targeted information acquisition rather than bulk collection.” ODNI asked the National Research Council (NRC) — the operating arm of the National Academy of Sciences and National Academy of Engineering — to conduct a study, which began in June 2014, to assist in preparing a response to the President. Over the ensuing months, a committee of experts appointed by the Research Council produced the report.

Believe it or not, you can’t copy-n-paste from the pre-publication PDF file. Truly irritating.

From the report:

Conclusion 1. There is no software technique that will fully substitute for bulk collection where it is relied on to answer queries about the past after new targets become known.

A key value of bulk collection is its record of past signals intelligence that may be relevant to subsequent investigations. If past events become interesting in the present, because intelligence-gathering priorities change to include detection of new kinds of threats or because of new events such as the discovery that an individual is a terrorist, historical events and the context they provide will be available for analysis only if they were previously collected. (Emphasis in the original)

The report dodges any questions about effectiveness or appropriateness of bulk collection of signals data. However, its number one conclusion provides all the ammunition one needs to establish that bulk signals intelligence gathering is a clear and present danger to the American people and any semblance of a democratic government.
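
Conclusion 1 is nearly tautological, and a few lines of code make it concrete: a retrospective query returns something only if collection happened before the target became interesting. The store and the records below are hypothetical.

```python
# Conclusion 1 in miniature: "what did this target do last year?" can only be
# answered if the records were kept before the target was a target.

history = {}   # identifier -> list of past events, populated as data arrives

def collect(identifier, event):
    history.setdefault(identifier, []).append(event)

def retrospective(identifier):
    """Returns something only if collection preceded targeting."""
    return history.get(identifier, [])

collect("id-42", "contact with id-99 on 2014-03-01")
# Months later id-42 becomes a target; the past is available only because it
# was collected while id-42 was of no interest at all.
print(retrospective("id-42"))
print(retrospective("id-77"))   # never collected, nothing to query
```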

Would deciding that all Muslims from the Middle East represented potential terrorist threats to the United States qualify as a change in intelligence-gathering priorities? So all the bulk signals data from Muslims and their contacts in the United States suddenly becomes fair game for the NSA to investigate?

I don’t think any practicing Muslim is a threat to any government but you saw how quickly the French backslid into bigotry after Charlie Hebdo. Maybe they didn’t have that far to go. Not any further than large segments of the U.S. population.

Our National Research Council is too timid to voice an opinion other than to say that if you don’t preserve signals records you can’t consult them in the future. But as to whether there is any danger, or whether this is a good policy choice, they aren’t up for those questions.

The focus on signals intelligence makes you wonder how local and state police have operated all these years without bulk signals intelligence. How have they survived without it? Well, for one thing they are out in the communities they serve, not cooped up in cube farms with other people who have no experience with the communities in question. Simply being a member of the community makes them aware of newcomers, changes in local activity, etc.

Traditional law enforcement doesn’t stop crime as a general rule because that would require too much surveillance and too many resources to be feasible. When a crime has been committed, law enforcement gathers evidence and, in a very large (90%+) number of cases, captures the people responsible.

Which is an interesting parallel to the NSA, which has also not stopped any terrorist plots as far as anyone knows. Well, there was that case in the State of Georgia where two aging alcoholics were boasting about producing ricin and driving down I-285 throwing it out the window. The government got a convicted child molester to work as an informant to put those two very dangerous terrorists in jail. And I don’t think the NSA was in on that one anyway.

If the NSA has stopped a major terrorist plot, something that actually was going to be another 9/11, you know it would have been leaked long before now. The absence of such leaks is the best evidence for the lack of any viable terrorist threats in the United States that I can think of.

And what if we stop bulk signals data collection and there is another terrorist attack? So, what is your question? Bulk signals collection hasn’t stopped one so far, so if we stop bulk signals collection and there is another terrorist attack, look at all the money we will have saved for the same result. Just as a policy matter, we shouldn’t spend money for no measurable result.

If you really think terrorism is a threat, take the money from bulk signal data collection and use it to fund the hiring, training and paying (long term, not just a grant) of more local police officers out in their communities. That will do more to reduce the potential for all types of crimes, including those labeled as terrorism.

To put it another way, bulk signal data collection is a form of wealth sharing, wealth sharing from the public treasury to contractors. Wealth sharing that has been shown to be ineffectual against terrorism. Why continue it?

November 16, 2014

Defence: a quick guide to key internet links

Filed under: Defense,Intelligence,Military — Patrick Durusau @ 6:55 pm

Defence: a quick guide to key internet links by David Watt and Nicole Brangwin.

While browsing at Full Text Reports, I saw this title with the following listing of contents:

  • Australian Parliament
  • Australian Government
  • Military history
  • Strategic studies
  • Australian think tanks and non-government organisations
  • International think tanks and organisations
  • Foreign defence

The document is a five (5) page PDF file that has a significant number of links, particularly to Australian military resources. Under “Foreign defence” I did find the Chinese People’s Liberation Army but no link for ISIL.

This may save you some time if you are spidering Australian military sites but appears to be incomplete for other areas.

September 24, 2014

Intelligence Community On/Off The Record

Filed under: Intelligence,Security — Patrick Durusau @ 2:51 pm

While looking up a particular NSA leak today I discovered:

IC On The Record

Direct access to factual information related to the lawful foreign surveillance activities of the U.S. Intelligence Community.

Created at the direction of the President of the United States and maintained by the Office of the Director of National Intelligence.

and,

IC Off The Record

Direct access to leaked information related to the surveillance activities of the U.S. Intelligence Community and their partners.

IC Off The Record points to IC On The Record but the reverse isn’t true.

When you visit IC On The Record, tweet about IC Off The Record. Help everyone come closer to a full understanding of the intelligence community.

September 21, 2014

Fixing Pentagon Intelligence [‘data glut but an information deficit’]

Filed under: Intelligence,Marketing,Topic Maps — Patrick Durusau @ 4:24 pm

Fixing Pentagon Intelligence by John R. Schindler.

From the post:

The U.S. Intelligence Community (IC), that vast agglomeration of seventeen different hush-hush agencies, is an espionage behemoth without peer anywhere on earth in terms of budget and capabilities. Fully eight of those spy agencies, plus the lion’s share of the IC’s budget, belong to the Department of Defense (DoD), making the Pentagon’s intelligence arm something special. It includes the intelligence agencies of all the armed services, but the jewel in the crown is the National Security Agency (NSA), America’s “big ears,” with the National Geospatial-Intelligence Agency (NGA), which produces amazing imagery, following close behind.

None can question the technical capabilities of DoD intelligence, but do the Pentagon’s spies actually know what they are talking about? This is an important, and too infrequently asked, question. Yet it was more or less asked this week, in a public forum, by a top military intelligence leader. The venue was an annual Washington, DC, intelligence conference that hosts IC higher-ups while defense contractors attempt a feeding frenzy, and the speaker was Rear Admiral Paul Becker, who serves as the Director of Intelligence (J2) on the Joint Chiefs of Staff (JCS). A career Navy intelligence officer, Becker’s job is keeping the Pentagon’s military bosses in the know on hot-button issues: it’s a firehose-drinking position, made bureaucratically complicated because JCS intelligence support comes from the Defense Intelligence Agency (DIA), which is an all-source shop that has never been a top-tier IC agency, and which happens to have some serious leadership churn at present.

Admiral Becker’s comments on the state of DoD intelligence, which were rather direct, merit attention. Not surprisingly for a Navy guy, he focused on China. He correctly noted that we have no trouble collecting the “dots” of (alleged) 9/11 infamy, but can the Pentagon’s big battalions of intel folks actually derive the necessary knowledge from all those tasty SIGINT, HUMINT, and IMINT morsels? Becker observed — accurately — that DoD intelligence possesses a “data glut but an information deficit” about China, adding that “We need to understand their strategy better.” In addition, he rued the absence of top-notch intelligence analysts of the sort the IC used to possess, asking pointedly: “Where are those people for China? We need them.”

Admiral Becker’s:

“data glut but an information deficit” (emphasis added)

captures the essence of phone record subpoenas, mass collection of emails, etc., all designed to give the impression of frenzied activity, with no proof of effectiveness. That is an “information deficit.”

Be reassured that you can host a data glut in a topic map, so topic maps per se are not a threat to current data gluts. It is possible, however, to use topic maps over existing data gluts to create information and actionable intelligence. Without disturbing the underlying data gluts and their contractors.
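
Here is a minimal sketch of what "a topic map over existing data gluts" can mean in practice: the underlying stores keep their own keys untouched, and a thin overlay records which keys identify the same subject. The stores, keys and subject identifiers are all invented.

```python
# A topic-map-like overlay: each silo keeps its own keys; the overlay only
# records which keys identify the same subject. Store contents are invented.

phone_store = {"+15550100": ["call 2014-09-01", "call 2014-09-03"]}
email_store = {"k.smith@example.com": ["msg 2014-09-02"]}

# One subject, several identifiers in different systems.
subjects = {
    "subject-001": {"phone": "+15550100", "email": "k.smith@example.com"},
}

def collated(subject_id):
    """Pull everything known about one subject without moving or rewriting
    the underlying stores."""
    ids = subjects[subject_id]
    return (phone_store.get(ids["phone"], []) +
            email_store.get(ids["email"], []))

print(collated("subject-001"))
# ['call 2014-09-01', 'call 2014-09-03', 'msg 2014-09-02']
```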

I tried to find a video of Adm. Becker’s presentation but apparently the Intelligence and National Security Summit 2014 does not provide video recordings of presentations. Whether that is to prevent any contemporaneous record being kept of remarks or just being low-tech kinda folks isn’t clear.

I can point out the meeting did have a known liar, “The Honorable James Clapper,” on the agenda. Hard to know if having perjured himself in front of Congress has made him gun shy of recorded speeches or not. (For Clapper’s latest “spin,” on “the least untruthful,” see: James Clapper says he misspoke, didn’t lie about NSA surveillance.) One hopes by next year’s conference Clapper will appear as: James Clapper, former DNI, convicted felon, Federal Prison Register #….

If you are interested in intelligence issues, you should be following John R. Schindler. His is a U.S. perspective, but handling intelligence issues with topic maps will vary in the details, not the underlying principles, from one intelligence service to another.

Disclosure: I rag on the intelligence services of the United States because I have greater access to public information on those services. Don’t take that as greater interest in how their operations, as opposed to those of other intelligence services, could be improved by topic maps.

I am happy to discuss how your intelligence services can (or can’t) be improved by topic maps. There are problems, such as those discussed by Admiral Becker, that can’t be fixed by using topic maps. I will be as quick to point those out as I will problems where topic maps are relevant. My goal is your satisfaction that topic maps made a difference for you, not having a government entity in a billing database.

June 21, 2014

Storing and visualizing LinkedIn…

Filed under: Intelligence,Neo4j,Social Networks,Visualization — Patrick Durusau @ 4:42 pm

Storing and visualizing LinkedIn with Neo4j and sigma.js by Bob Briody.

From the post:

In this post I am going to present a way to:

  • load a linkedin network via the linkedIn developer API into neo4j using python
  • serve the network from neo4j using node.js, express.js, and cypher
  • display the network in the browser using sigma.js
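
For readers who just want the flavor of the Neo4j loading step, here is a hedged sketch using the current official Python driver rather than the exact stack in Bob's post (LinkedIn API, node.js, sigma.js). The connection details and sample connections are placeholders.

```python
# Minimal sketch of loading a contact network into Neo4j with the official
# Python driver (neo4j >= 5; older drivers use write_transaction instead of
# execute_write). Credentials and data are placeholders.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

connections = [
    {"me": "alice", "contact": "bob"},
    {"me": "alice", "contact": "carol"},
]

def load(tx, me, contact):
    # MERGE keeps the load idempotent: re-running it will not duplicate nodes.
    tx.run(
        "MERGE (a:Person {name: $me}) "
        "MERGE (b:Person {name: $contact}) "
        "MERGE (a)-[:CONNECTED_TO]->(b)",
        me=me, contact=contact,
    )

with driver.session() as session:
    for c in connections:
        session.execute_write(load, c["me"], c["contact"])

driver.close()
```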

Great post but it means one (1) down and two hundred and five (205) more to go, if you are a member of the social networks listed on List of social networking websites at Wikipedia, and that excludes dating sites and includes only “notable, well-known sites.”

I would be willing to bet that your social network of friends, the members of your religious organization, the people where you work, etc., would swell the number of other social networks that count you as a member.

Hmmm, so one-off social network visualizations are just that, one-off social network visualizations. You can be seen as part of one group and not, say, two or three intersecting groups.

Moreover, an update to one visualized network isn’t going to percolate into another visualized network.

There is the “normalize your graph” solution to integrate such resources but what if you aren’t the one to realize the need for “normalization?”

You have two separate actors in your graph visualization after doing the best you can. Another person encounters information indicating these “two” people are in fact one person. They update their data. But that updated knowledge has no impact on your visualization, unless you simply happen across it.

Seems like a poor way to run intelligence gathering, doesn’t it?
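
One remedy is to publish identity decisions somewhere both graphs can see them: a shared "same-as" map that each visualization applies before rendering. A toy sketch, with invented identifiers:

```python
# A shared "same-as" map lets one analyst's merge decision reach everyone
# else's graph instead of silently diverging. Names are invented.

same_as = {"actor-17b": "actor-03a"}   # published by whoever made the finding

def canonical(actor_id):
    """Follow same-as links to the agreed identity."""
    while actor_id in same_as:
        actor_id = same_as[actor_id]
    return actor_id

edges = [("actor-03a", "actor-55"), ("actor-17b", "actor-60")]

# Re-keying the local graph against the shared map collapses the duplicate.
merged_edges = [(canonical(a), canonical(b)) for a, b in edges]
print(merged_edges)   # [('actor-03a', 'actor-55'), ('actor-03a', 'actor-60')]
```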

June 4, 2014

Health Intelligence

Filed under: Data Mining,Intelligence,Visualization — Patrick Durusau @ 4:55 pm

Health Intelligence: Analyzing health data, generating and communicating evidence to improve population health. by Ramon Martinez.

I was following a link to Ramon’s Data Sources page when I discovered his site. The list of data resources is long and impressive.

But there is so much more under Resources!

  • Data Tools
  • Database (DB) Blogs
  • Data Visualization Tools
  • Data Viz Blogs
  • Reading for Data Visualizations
  • Best of the Web…
  • Tableau Training
  • Going to School
  • Reading for Health Analysis

You will probably like the rest of the site as well!

Data tools/visualization are very ecumenical.

May 29, 2014

Open-Source Intelligence

Filed under: Intelligence,Open Source — Patrick Durusau @ 7:10 pm

Big data brings new power to open-source intelligence by Matthew Moran.

From the post:

In November 2013, the New Yorker published a profile of Eliot Higgins – or Brown Moses as he is known to almost 17,000 Twitter followers. An unemployed finance and admin worker at the time, Higgins was held up as an example of what can happen when we take advantage of the enormous amount of information being spread across the internet every day. The New Yorker’s eight-page spread described Higgins as “perhaps the foremost expert on the munitions used in the [Syrian] war”, a remarkable description for someone with no formal training in munitions or intelligence.

Higgins does not speak Arabic and has never been to the Middle East. He operates from his home in Leicester and, until recently, conducted his online investigations as an unpaid hobby. Yet the description was well-founded. Since starting his blog in 2012, Higgins has uncovered evidence of the Syrian army’s use of cluster bombs and exposed the transfer of weapons from Iran to Syria. And he has done it armed with nothing more than a laptop and an eye for detail.

This type of work is a form of open-source intelligence. Higgins exploits publicly accessible material such as online photos, video and social media updates to piece together information about the Syrian conflict. His analyses have formed the basis of reports in The Guardian and a blog for The New York Times, while his research has been cited by Human Rights Watch.

Matthew makes a compelling case for open-source intelligence, using Eliot Higgins as an example.

No guarantees of riches or fame but data is out there to be mined and curated.

All that is required is for you to find it, package it and find the right audience and/or buyer.

No small order but what else are you doing this coming weekend? 😉

PS: Where would you place requests for intelligence or offer intelligence for sale? Just curious.

Global Data of Events, Languages, and Tones

Filed under: GDELT,Intelligence — Patrick Durusau @ 6:55 pm

More than 250 million global events are now in the cloud for anyone to analyze by Derrick Harris.

From the post:

Georgetown University researcher Kalev Leetaru has spent years building the Global Database of Events, Languages, and Tones. It now contains data on more than 250 million events dating back to 1979 and updated daily, with 58 different fields apiece, across 300 categories. Leetaru uses it to produce a daily report analyzing global stability. He and others have used it to figure out whether the kidnapping of 200 Nigerian girls was a predictable event and watch Crimea turn into a hotspot of activity leading up to ex-Ukrainian president Viktor Yanukovych’s ouster and Russia’s subsequent invasion.

“The idea of GDELT is how do we create a catalog, essentially, of everything that’s going on across the planet, each day,” Leetaru explained in a recent interview.

And now all of it is available in the cloud, for free, for anybody to analyze as they desire. Leetaru has partnered with Google, where he has been hosting GDELT for the past year, to make it available (here) as a public dataset that users can analyze directly with Google BigQuery. Previously, anyone interested in the data had to download the 100-gigabyte dataset and analyze it on their own machines. They still can, of course, and Leetaru recently built a catalog of recipes for various analyses and a BigQuery-based method for slicing off specific parts of the data.

See Derrick’s post for additional details.

When I previously wrote about GDELT it wasn’t available for querying with Google’s BigQuery. That should certainly improve access to this remarkable resource.
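
As a hedged example of the kind of query now possible, here is a sketch using the google-cloud-bigquery client. The table name (gdelt-bq.full.events) and the column names are my recollection of the public dataset listing; verify them against the dataset before relying on this.

```python
# Hedged sketch of querying the GDELT public dataset with BigQuery.
# Table and column names should be checked against the current listing.

from google.cloud import bigquery

client = bigquery.Client()   # assumes application-default credentials

sql = """
SELECT Year, COUNT(*) AS events
FROM `gdelt-bq.full.events`
WHERE Actor1CountryCode = 'UKR'
GROUP BY Year
ORDER BY Year DESC
LIMIT 10
"""

for row in client.query(sql).result():
    print(row.Year, row.events)
```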

Perhaps intelligence gathering/analysis will become a cottage industry.

That’s a promising idea.

See also: Google BigQuery homepage.

April 16, 2014

‘immersive intelligence’ [Topic Map-like application]

Filed under: Intelligence,Subject Identity,Topic Maps — Patrick Durusau @ 10:03 am

Long: NGA is moving toward ‘immersive intelligence’ by Sean Lyngaas.

From the post:

Of the 17 U.S. intelligence agencies, the National Geospatial-Intelligence Agency is best suited to turn big data into actionable intelligence, NGA Director Letitia Long said. She told FCW in an April 14 interview that mapping is what her 14,500-person agency does, and every iota of intelligence can be attributed to some physical point on Earth.

“We really are the driver for intelligence integration because everything is somewhere on the Earth at a point in time,” Long said. “So we give that ability for all of us who are describing objects to anchor it to the Map of the World.”

NGA’s Map of the World entails much more minute information than the simple cartography the phrase might suggest. It is a mix of information from top-secret, classified and unclassified networks made available to U.S. government agencies, some of their international partners, commercial users and academic experts. The Map of the World can tap into a vast trove of satellite and social media data, among other sources.

NGA has made steady progress in developing the map, Long said. Nine data layers are online and available now, including those for maritime and aeronautical data. A topography layer will be added in the next two weeks, and two more layers will round out the first operational version of the map in August.

Not surprisingly, the National Geospatial-Intelligence Agency sees geography as the organizing principle for intelligence integration. Or as NGA Director Long says: “…everything is somewhere on the Earth at a point in time.” I can’t argue with the accuracy of that statement, save for extraterrestrial events, satellites, space-based weapons, etc.

On the other hand, you could gather intelligence by point of origin, places referenced, people mentioned (their usual locations), etc., in languages spoken by more than thirty (30) million people, and you could have a sack with intelligence in forty (40) languages. (See the List of languages by number of native speakers.)

When I say “topic map-like” application, I mean that the NGA has chosen geographic location as the organizing principle for intelligence, as opposed to using subjects as the organizing principle, of which geographic location is only one type. With that broader organizing principle, it would be easier to integrate data from other agencies that have their own organizing principles for the intelligence they gather.

I like the idea of “layers” as described in the post. In part because a topic map can exist as an additional layer on top of the current NGA layers to integrate other intelligence data on a subject basis with the geographic location system of the NGA.

Think of topic maps as being “in addition to” and not “instead of” your current integration technology.
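
A small sketch of the "subject first, location as one property" idea: organize by subject, keep the geographic anchor as a property, and project back onto a location-keyed layer on demand. All identifiers and coordinates are invented.

```python
# Subject-centric store with geography as one property among several; the map
# layer can still be recovered by projection. Data is invented.

subjects = {
    "subject-112": {
        "names": ["K. Smith", "Smith, K."],
        "location": (31.77, 35.21),           # the geographic anchor
        "sources": ["report-4471", "intercept-88"],
    },
}

def map_layer(subjects):
    """Project the subject-centric store into a location-keyed layer,
    recovering the geographic organizing principle on demand."""
    return {props["location"]: sid for sid, props in subjects.items()}

print(map_layer(subjects))   # {(31.77, 35.21): 'subject-112'}
```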

What’s your principle for organizing intelligence? Would it be useful to integrate data organized around other principles for organizing intelligence? And still find the way back to the original data?

PS: Do you remember the management book “Who Moved My Cheese?” Moving intelligence from one system to another can result in “Who Moved My Intelligence?” when it can no longer be discovered by its originator. Not to mention that the intelligence will lack the context of its point of origin.

March 7, 2014

Who Are the Customers for Intelligence?

Filed under: Intelligence,Marketing — Patrick Durusau @ 8:37 pm

Who Are the Customers for Intelligence? by Peter C. Oleson.

From the paper:

Who uses intelligence and why? The short answer is almost everyone and to gain an advantage. While nation-states are most closely identified with intelligence, private corporations and criminal entities also invest in gathering and analyzing information to advance their goals. Thus the intelligence process is a service function, or as Australian intelligence expert Don McDowell describes it,

Information is essential to the intelligence process. Intelligence… is not simply an amalgam of collected information. It is instead the result of taking information relevant to a specific issue and subjecting it to a process of integration, evaluation, and analysis with the specific purpose of projecting future events and actions, and estimating and predicting outcomes.

It is important to note that intelligence is prospective, or future oriented (in contrast to investigations that focus on events that have already occurred).

As intelligence is a service, it follows that it has customers for its products. McDowell differentiates between “clients” and “customers” for intelligence. The former are those who commission an intelligence effort and are the principal recipients of the resulting intelligence product. The latter are those who have an interest in the intelligence product and could use it for their own purposes. Most scholars of intelligence do not make this distinction. However, it can be an important one as there is an implied priority associated with a client over a customer. (footnote markers omitted)

If you want to sell the results of topic maps, that is, highly curated data that can be viewed from multiple perspectives, this essay should spark your thinking about potential customers.

You may also find this website useful: Association of Former Intelligence Officers.

I first saw this at Full Text Reports as Who Are the Customers for Intelligence? (draft).

October 8, 2013

Splunk Enterprise 6

Filed under: Intelligence,Machine Learning,Operations,Splunk — Patrick Durusau @ 3:27 pm

Splunk Enterprise 6

The latest version of Splunk is described as:

Operational Intelligence for Everyone

Splunk Enterprise is the leading platform for real-time operational intelligence. It’s the easy, fast and secure way to search, analyze and visualize the massive streams of machine data generated by your IT systems and technology infrastructure—physical, virtual and in the cloud.

Splunk Enterprise 6 is our latest release and delivers:

  • Powerful analytics for everyone—at amazing speeds
  • Completely redesigned user experience
  • Richer developer environment to easily extend the platform

The current download page promises the enterprise version for 60 days. At the end of that period you can convert to a Free license or purchase an Enterprise license.

June 9, 2013

IEEE Intelligence and Security Informatics 2013

Filed under: Intelligence,Security — Patrick Durusau @ 4:49 pm

IEEE Intelligence and Security Informatics 2013

The conference ran from June 4-7, 2013 and, with recent disclosures about the NSA, is a subject of interest.

You may want to scan the program (title link) for topics and/or researchers of interest.

May 19, 2013

Got Balls?

Filed under: Intelligence,Military,Security — Patrick Durusau @ 8:16 am

IED Trends: Turning Tennis Balls Into Bombs

From the post:

Terrorists are relentlessly evolving tactics and techniques for IEDs (Improvised Explosive Devices), and analyzing reporting on IEDs can provide insight complementary to HUMINT on emerging militant methods. Preparing for an upcoming webcast with our friends at Terrogence, we found incidents using sports balls, particularly tennis balls and cricket balls, more frequently appearing as a delivery vehicle for explosives.

When we break these incidents from the last four months down by location, the city of Karachi in southern Pakistan stands out as a hotbed. There is also evidence that this tactic is being embraced around the globe as you can see sports balls fashioned into bombs found from Longview, Washington in the United States to Varanasi in India.

We can use Recorded Future’s Web Intelligence platform to plot out the locations where incidents have recently occurred as well as the frequency and timing.

Interesting, but the military, by its stated doctrines, should be providing this information in theater-specific IED briefings.

See for example: FMI 3-34.119/MCIP 3-17.01 IMPROVISED EXPLOSIVE DEVICE DEFEAT

On boobytraps (the old name) in general, see: FM 5-31 Boobytraps (1965), which includes pressure cookers (pp. 73-74) and rubber balls (p. 87).

Topic maps offer rapid dissemination of “new” forms and checklists for where they may be found. (As opposed to static publications.)

Interesting that FM 5-31 reports an electric iron as a boobytrap, but an electric iron is more likely to show up on Antiques Roadshow than as an IED.

At least in the United States.

April 13, 2013

Office of the Director of National Intelligence: Data Mining 2012

Filed under: Data Mining,Intelligence — Patrick Durusau @ 6:57 pm

Office of the Director of National Intelligence: Data Mining 2012

Office of the Director of National Intelligence = ODNI

To cut directly to the chase:

II. ODNI Data Mining Activities

The ODNI did not engage in any activities to use or develop data mining functionality during the reporting period.

My source, KDNuggets, provides the legal loophole analysis.

Who watches the watchers?

Looks like that it’s going to be you and me.

Every citizen who recognizes a government employee, agent, or official: tweet the name you know them by, along with your location.

Just that.

If enough of us do that, patterns will begin to appear in the data stream.

If enough patterns appear in the data stream, the identities of government employees, agents, officials, will slowly become known.

Transparency won’t happen overnight or easily.

But if you are waiting for the watchers to watch themselves, you are going to be severely disappointed.

March 29, 2013

FLOPS Fall Flat for Intelligence Agency

Filed under: HPC,Intelligence,RFI-RFP,Semantics — Patrick Durusau @ 9:39 am

FLOPS Fall Flat for Intelligence Agency by Nicole Hemsoth.

From the post:

The Intelligence Advanced Research Projects Activity (IARPA) is putting out some RFI feelers in hopes of pushing new boundaries with an HPC program. However, at the core of their evaluation process is an overt dismissal of current popular benchmarks, including floating operations per second (FLOPS).

To uncover some missing pieces for their growing computational needs, IARPA is soliciting for “responses that illuminate the breadth of technologies” under the HPC umbrella, particularly the tech that “isn’t already well-represented in today’s HPC benchmarks.”

The RFI points to the general value of benchmarks (Linpack, for instance) as necessary metrics to push research and development, but argues that HPC benchmarks have “constrained the technology and architecture options for HPC system designers.” More specifically, in this case, floating point benchmarks are not quite as valuable to the agency as data-intensive system measurements, particularly as they relate to some of the graph and other so-called big data problems the agency is hoping to tackle using HPC systems.

Responses are due by Apr 05, 2013 4:00 pm Eastern.

Not that I expect most of you to respond to this RFI but I mention it as a step in the right direction for the processing of semantics.

Semantics are not native to vector fields and so every encoding of semantics in a vector field is a mapping.

Likewise, every extraction of semantics from a vector field is the reverse of that mapping process.

The impact of this mapping/unmapping of semantics to and from a vector field on interpretation is unclear.

As mapping and unmapping decisions are interpretative, it seems reasonable to conclude there is some impact. How much isn’t known.
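
A toy illustration of the worry, with made-up vectors: encode meanings into a vector space, decode by nearest neighbor, and watch a distinction we cared about disappear in the round trip.

```python
# Lossy round trip: the encoding and the nearest-neighbor decoding are both
# interpretive choices. The vectors below are invented for illustration.

import math

encode = {
    "bank (river)":   (0.90, 0.10),
    "bank (finance)": (0.85, 0.15),   # deliberately close to "bank (river)"
    "canoe":          (0.10, 0.95),
}

def decode(vector):
    """Unmapping: the nearest stored meaning wins, right or not."""
    return min(encode, key=lambda w: math.dist(encode[w], vector))

# A vector meant as "bank (finance)" but nudged by processing noise:
print(decode((0.89, 0.11)))   # prints 'bank (river)': the meaning shifted
```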

Vector fields are easy for high FLOPS systems to process but do you want a fast inaccurate answer or one that bears some resemblance to reality as experienced by others?

Graph databases, to name one alternative, are the current rage, at least according to graph database vendors.

But saying “graph database,” isn’t the same as usefully capturing semantics with a graph database.

Or processing semantics once captured.

What we need is an alternative to FLOPS that represents effective processing of semantics.

Suggestions?

March 14, 2013

Worldwide Threat Assessment…

Filed under: Cybersecurity,Intelligence,Security — Patrick Durusau @ 9:35 am

Worldwide Threat Assessment of the US Intelligence Community, Senate Select Committee on Intelligence, James R. Clapper, Director of National Intelligence, March 12, 2013.

Thought you might be interested in the cybersecurity parts, which are marketing literature material if your interests lie toward security issues.

It has tidbits like this one:

Foreign intelligence and security services have penetrated numerous computer networks of US Government, business, academic, and private sector entities. Most detected activity has targeted unclassified networks connected to the Internet, but foreign cyber actors are also targeting classified networks. Importantly, much of the nation’s critical proprietary data are on sensitive but unclassified networks; the same is true for most of our closest allies. (emphasis added)

Just curious, if you discovered your retirement funds were in your mail box, would you move them to a more secure location?

Depending on the products or services you are selling, the report may have other marketing information.

I first saw this in a tweet by Jeffrey Carr.

March 13, 2013

Hiding in Plain Sight/Being Secure From The NSA

Filed under: Cryptography,Cybersecurity,Intelligence,Security — Patrick Durusau @ 3:15 pm

I presume that if a message can be “overheard,” electronically or otherwise, it is likely that the NSA and other “fictional” groups are capturing it.

The use of encryption marks you as a possible source of interest.

You can use image-based steganography to conceal messages but that requires large file sizes and is subject to other attacks.

Professor Abdelrahman Desoky of the University of Maryland, Baltimore County, USA, suggests that messages can be hidden in plain sight by changing the wording of jokes to carry a secret message.

Desoky suggests that instead of using a humdrum text document and modifying it in a codified way to embed a secret message, correspondents could use a joke to hide their true meaning. As such, he has developed an Automatic Joke Generation Based Steganography Methodology (Jokestega) that takes advantage of recent software that can automatically write pun-type jokes using large dictionary databases. Among the automatic joke generators available are: The MIT Project, Chuck Norris Joke Generator, Jokes2000, The Joke Generator dot Com and the Online Joke Generator System (pickuplinegen).

A simple example might be to hide the code word “shaking” in the following auto-joke. The original question and answer joke is “Where do milk shakes come from?” and the correct answer would be “From nervous cows.” So far, so funny. But, the system can substitute the word “shaking” for “nervous” and still retain the humor so that the answer becomes “From shaking cows.” It loses some of its wit, but still makes sense and we are not all Bob Hopes, after all. [Hiding Secret Messages in Email Jokes]

Or if you prefer the original article abstract:

This paper presents a novel steganography methodology, namely Automatic Joke Generation Based Steganography Methodology (Jokestega), that pursues textual jokes in order to hide messages. Basically, Jokestega methodology takes advantage of recent advances in Automatic Jokes Generation (AJG) techniques to automate the generation of textual steganographic cover. In a corpus of jokes, one may judge a number of documents to be the same joke although letters, locations, and other details are different. Generally, joke and puns could be retold with totally different vocabulary, while still retaining their identities. Therefore, Jokestega pursues the common variations among jokes to conceal data. Furthermore, when someone is joking, anything may be said which legitimises the use of joke-based steganography. This makes employing textual jokes very attractive as steganographic carrier for camouflaging data. It is worth noting that Jokestega follows Nostega paradigm, which implies that joke-cover is noiseless. The validation results demonstrate the effectiveness of Jokestega. [Jokestega: automatic joke generation-based steganography methodology by Abdelrahman Desoky. International Journal of Security and Networks (IJSN), Vol. 7, No. 3, 2012]

If you are interested, other publications by Professor Desoky are listed here.
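
To see how little machinery the substitution trick needs, here is a toy sketch (not Desoky's implementation): the sender and receiver share a codebook of interchangeable words, and which variant of the joke gets sent is the hidden bit.

```python
# Toy word-substitution steganography in the spirit of the milk shake example.
# The codebook and the joke template are invented for illustration.

codebook = {"nervous": "0", "shaking": "1"}   # either word keeps the joke intact

def embed(secret_bit):
    word = "shaking" if secret_bit == "1" else "nervous"
    return f"Where do milk shakes come from? From {word} cows."

def extract(joke):
    for word, bit in codebook.items():
        if word in joke:
            return bit
    return None

msg = embed("1")
print(msg)             # Where do milk shakes come from? From shaking cows.
print(extract(msg))    # '1'
```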

Occurs to me that topic maps offer the means to create steganography chains over public channels. The sender may know its meaning but there can be several links in the chain of transmission that change the message but have no knowledge of its meaning. And/or that don’t represent traceable links in the chain.

With every “hop” and/or mapping of the terms to another vocabulary, the task of statistical analysis grows more difficult.

Not the equivalent of highly secure communication networks, the contents of which can be copied onto a Lady Gaga DVD, but then not everyone needs that level of security.

Some people need cheaper but more secure systems for communication.

Will devote some more thought to the outline of a topic map system for hiding content in plain sight.

March 12, 2013

Fast Data Gets A Jump On Big Data

Filed under: BigData,Decision Making,Intelligence — Patrick Durusau @ 2:59 pm

Fast Data Gets A Jump On Big Data by Hasan Rizvi.

The title reminded me of a post by Sam Hunting that asked: “How come we’ve got Big Data and not Good Data?”

Now “big data” is to give way to “fast data.”

From the post:

Today, both IT and business users alike are facing business scenarios where they need better information to differentiate, innovate, and radically transform their business.

In many cases, that transformation is being enabled by a move to “Big Data.” Organizations are increasingly collecting vast quantities of real-time data from a variety of sources, from online social media data to highly-granular transactional data to data from embedded sensors. Once collected, users or businesses are mining the data for meaningful patterns that can be used to drive business decisions or actions.

Big Data uses specialized technologies (like Hadoop and NoSQL) to process vast amounts of information in bulk. But most of the focus on Big Data so far has been on situations where the data being managed is basically fixed—it’s already been collected and stored in a Big Data database.

This is where Fast Data comes in. Fast Data is a complimentary approach to Big Data for managing large quantities of “in-flight” data that helps organizations get a jump on those business-critical decisions. Fast Data is the continuous access and processing of events and data in real-time for the purposes of gaining instant awareness and instant action. Fast Data can leverage Big Data sources, but it also adds a real-time component of being able to take action on events and information before they even enter a Big Data system.

Sorry Sam, “good data” misses out again.

Data isn’t the deciding factor in human decision making, instant or otherwise; see Thinking, Fast and Slow by Daniel Kahneman.

Supplying decision makers with good data and sufficient time to consider it, is the route to better decision making.

Of course, that leaves time to discover the poor quality of data provided by fast/big data delivery mechanisms.

February 24, 2013

In-Q-Tel (IQT)

Filed under: Funding,Intelligence — Patrick Durusau @ 8:31 pm

In-Q-Tel (IQT)

From the about page:

THE IQT MISSION

Launched in 1999 as an independent, not-for-profit organization, IQT was created to bridge the gap between the technology needs of the U.S. Intelligence Community (IC) and new advances in commercial technology. With limited insight into fast-moving private sector innovation, the IC needed a way to find emerging companies, and, more importantly, to work with them. As a private company with deep ties to the commercial world, we attract and build relationships with technology startups outside the reach of the Intelligence Community. In fact, more than 70 percent of the companies that IQT partners with have never before done business with the government.

As a strategic investor, our model is unique. We make investments in startup companies that have developed commercially-focused technologies that will provide strong, near-term advantages (within 36 months) to the IC mission. We design our strategic investments to accelerate product development and delivery for this ready-soon innovation, and specifically to help companies add capabilities needed by our customers in the Intelligence Community. Additionally, IQT effectively leverages its direct investments by attracting a significant amount of private sector funds, often from top-tier venture capital firms, to co-invest in our portfolio companies. On average, for every dollar that IQT invests in a company, the venture capital community has invested over nine dollars, helping to deliver crucial new capabilities at a lower cost to the government.

Topic maps could offer advantages to an intelligence community, either vis-à-vis other intelligence communities and/or vis-à-vis competitors in the same intelligence community.

A funding source to consider for topic maps in intelligence work.

I first saw this at Beyond Search.

January 19, 2013

Building the Library of Twitter

Filed under: Intelligence,Security,Tweets — Patrick Durusau @ 7:08 pm

Building the Library of Twitter by Ian Armas Foster.

From the post:

On an average day people around the globe contribute 500 million messages to Twitter. Collecting and storing every single tweet and its resulting metadata from a single day would be a daunting task in and of itself.

The Library of Congress is trying something slightly more ambitious than that: storing and indexing every tweet ever posted.

With the help of social media facilitator Gnip, the Library of Congress aims to create an archive where researchers can access any tweet recorded since Twitter’s inception in 2006.

According to this update on the progress of the seemingly herculean project, the LOC has already archived 170 billion tweets and their respective metadata. That total includes the posts from 2006-2010, which Gnip compressed and sent to the LOC over three different files of 2.3 terabytes each. When the LOC uncompressed the files, they filled 20 terabytes’ worth of server space representing 21 billion tweets and its supplementary 50 metadata fields.

It is often said that 90% of the world’s data has accrued over the last two years. That is remarkably close to the truth for Twitter, as an additional 150 billion tweets (88% of the total) poured into the LOC archive in 2011 and 2012. Further, Gnip delivers hourly updates to the tune of half a billion tweets a day. That means 42 days’ worth of 2012-2013 tweets equal the total amount from 2006-2010. In all, they are dealing with 133.2 terabytes of information.

Now there’s a big data problem for you! Not to mention a resource problem for the Library of Congress.

You might want to make a contribution to help fund their work on this project.

Obviously of incredible value for researchers at all levels, smaller sub-sets of the Twitter stream may be valuable as well.

If I were designing a Twitter-based lexicon for covert communication, for example, I would want to use frequent terms from particular geographic locations.

And/or create patterns of tweets from particular accounts so that they don’t stand out from others.

Not to mention trying to crunch the Twitter stream for content I know must be present.
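
As a sketch of the "lexicon from local chatter" idea, here is a toy term-frequency count by location; the most common local terms are the least conspicuous carriers. The sample tweets are invented.

```python
# Count term frequencies per location so a covert vocabulary blends into what
# that area already tweets. Sample data is invented.

from collections import Counter, defaultdict

tweets = [
    {"place": "Leicester", "text": "match day traffic again on the ring road"},
    {"place": "Leicester", "text": "ring road closed before the match"},
    {"place": "Karachi",   "text": "load shedding again tonight"},
]

by_place = defaultdict(Counter)
for t in tweets:
    by_place[t["place"]].update(t["text"].lower().split())

# The most common local terms for a given place:
print(by_place["Leicester"].most_common(3))
```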

Federal Big Data Forum

Filed under: BigData,Conferences,Intelligence,Security — Patrick Durusau @ 7:07 pm

Are you architecting sensemaking solutions in the national security space? Register for 30 Jan Federal Big Data Forum sponsored by Cloudera by Bob Gourley.

From the post:

Friends at Cloudera are lead sponsors and coordinators of a new Big Data Forum focused on Apache Hadoop. The first, which will be held 30 January 2013 in Columbia Maryland, will be focused on lessons learned of use to the national security community. This is primarily for practitioners and leaders fielding real working Big Data solutions on Apache Hadoop and related technologies. I’ve seen a draft agenda, it includes a lineup of the nation’s greatest Big Data technologists, including the chairman of the Apache Software foundation and creator of Hadoop, Lucene and Nutch Doug Cutting.

This event is intentionally being focused on real practitioners and others who can benefit from lessons learned by those who have created/fielded real enterprise solutions. This will fill up fast. Please mark you calendar now and register right away. To register see: http://info.cloudera.com/federal-big-data-hadoop-forum.html

Bob’s post also has the invite.

I won’t be able to attend but would love to hear from anyone who does. Thanks!

December 24, 2012

Geospatial Intelligence Forum

Filed under: Integration,Intelligence,Interoperability — Patrick Durusau @ 2:32 pm

Geospatial Intelligence Forum: The Magazine of the National Intelligence Community

Apologies but I could not afford a magazine subscription for every reader of this blog.

The next best thing is a free magazine that may be useful in your data integration/topic map practice.

Defense intelligence has been a hot topic for the last decade and there are no signs that is going to change any time soon.

I was browsing through Geospatial Intelligence Forum (GIF) when I encountered:

Closing the Interoperability Gap by Cheryl Gerber.

From the article:

The current technology gaps can be frustrating for soldiers to grapple with, particularly in the middle of battlefield engagements. “This is due, in part, to stovepiped databases forcing soldiers who are working in tactical operations centers to perform many work-arounds or data translations to present the best common operating picture to the commander,” said Dr. Joseph Fontanella, AGC director and Army geospatial information officer.

Now there is a use case for interoperability, being “…in the middle of battlefield engagements.”

Cheryl goes on to identify five (5) gaps in interoperability.

GIF looks like a good place to pick up riffs, memes, terminology and even possible contacts.

Enjoy!

December 4, 2012

INSA Highlights Increasing Importance of Open Source

Filed under: Government,Government Data,Intelligence — Patrick Durusau @ 12:52 pm

INSA Highlights Increasing Importance of Open Source

From Recorded Future*:

The Intelligence and National Security Alliance (INSA) Rebalance Task Force recently released its new white paper “Expectations of Intelligence in the Information Age“.

We’re obviously big fans of open source analysis, so some of the lead observations reported by the task force really hit home. Here they are, as written by INSA:

  • The heightened expectations of decision makers for timely strategic warning and current intelligence can be addressed in significant ways by the IC through “open sourcing” of information.
  • “Open sourcing” will not replace traditional intelligence; decision makers will continue to expect the IC to extract those secrets others are determined to keep from the United States.
  • However, because decision makers will access open sources as readily as the IC, they will expect the IC to rapidly validate open source information and quickly meld it with that derived from espionage and traditional sources of collection to provide them with the knowledge desired to confidently address national security issues and events.

You can check out an interactive version of the full report here, and take a moment to visit Recorded Future to see how we’re embracing this synthesis of open source and confidential intelligence.

I have confidence that the IC will find ways to make its collection, recording, analysis and synthesis of open source information incompatible with its traditional intelligence sources.

After all, we are less than five (5) years away from some unknown level of sharing of traditional intelligence data: Read’em and Weep.

Let’s say there is some sort of intelligence sharing by 2017 (2012 + 5). That’s sixteen (16) years after 9/11.

Being mindful that sharing doesn’t mean integrated into the information flow of the respective agencies.

How does that saying go?

Once is happenstance.

Twice is coincidence.

Three times is enemy action?

Where does the continuing failure to share intelligence fall on that list?

(Topic maps can’t provide the incentives to make sharing happen, but they do make sharing possible for people with incentives to share.)


* I listed the entry as originating from Recorded Future. Why some blog authors find it difficult to identify themselves I cannot say.

October 20, 2012

How Google’s Dremel Makes Quick Work of Massive Data

Filed under: BigData,Dremel,Intelligence — Patrick Durusau @ 3:13 pm

How Google’s Dremel Makes Quick Work of Massive Data by Ian Armas Foster.

From the post:

The ability to process more data and the ability to process data faster are usually mutually exclusive. According to Armando Fox, professor of computer science at University of California at Berkeley, “the more you do one, the more you have to give up on the other.”

Hadoop, an open-source, batch processing platform that runs on MapReduce, is one of the main vehicles organizations are driving in the big data race.

However, Mike Olson, CEO of Cloudera, an important Hadoop-based vendor, is looking past Hadoop and toward today’s research projects. That includes one named Dremel, possibly Google’s next big innovation that combines the scale of Hadoop with the ever-increasing speed demands of the business intelligence world.

“People have done Big Data systems before,” Fox said, “but before Dremel, no one had really done a system that was that big and that fast.”

On Dremel, see: Dremel: Interactive Analysis of Web-Scale Datasets, as well.
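
The paper is the place to go for details, but here is a rough sketch, in Python, of why a column-oriented engine can be “that big and that fast.” The records and field names are made up, and Dremel itself layers a nested columnar format and a multi-level serving tree on top of this idea; the sketch only contrasts the two scan patterns.

# Minimal sketch: why column-oriented scans favor analytic queries.
# Records and field names are hypothetical; Dremel adds nested columnar
# storage and a serving tree on top of this basic idea.
records = [
    {"url": "example.com/a", "country": "US", "bytes": 1200},
    {"url": "example.com/b", "country": "DE", "bytes": 800},
    {"url": "example.com/c", "country": "US", "bytes": 450},
]

# Row-oriented scan: every query touches every field of every record.
total_row = sum(r["bytes"] for r in records if r["country"] == "US")

# Column-oriented scan: each field is stored as its own array, so the
# query reads only the columns it needs ("country" and "bytes" here).
columns = {
    "url": [r["url"] for r in records],
    "country": [r["country"] for r in records],
    "bytes": [r["bytes"] for r in records],
}
total_col = sum(b for c, b in zip(columns["country"], columns["bytes"])
                if c == "US")

assert total_row == total_col == 1650

In memory the two scans cost about the same; the payoff comes on disk, where a columnar layout lets the engine skip (and separately compress) the fields a query never touches.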

Are you looking (or considering looking) beyond Hadoop?

Accuracy and timeliness beyond the average daily intelligence briefing will drive demand for your information product.

Your edge is agility. Use it.

Sneak Peek into Skybox Imaging’s Cloudera-powered Satellite System [InaaS?]

Filed under: BigData,Cloudera,Geographic Data,Geography,Intelligence — Patrick Durusau @ 3:02 pm

Sneak Peek into Skybox Imaging’s Cloudera-powered Satellite System by Justin Kestelyn (@kestelyn)

This is a guest post by Oliver Guinan, VP of Ground Software at Skybox Imaging. Oliver is a 15-year veteran of the internet industry and is responsible for all ground system design, architecture and implementation at Skybox.

One of the great promises of the big data movement is using networks of ubiquitous sensors to deliver insights about the world around us. Skybox Imaging is attempting to do just that for millions of locations across our planet.

Skybox is developing a low cost imaging satellite system and web-accessible big data processing platform that will capture video or images of any location on Earth within a couple of days. The low cost nature of the satellite opens the possibility of deploying tens of satellites which, when integrated together, have the potential to image any spot on Earth within an hour.

Skybox satellites are designed to capture light in the harsh environment of outer space. Each satellite captures multiple images of a given spot on Earth. Once the images are transferred from the satellite to the ground, the data needs to be processed and combined to form a single image, similar to those seen within online mapping portals.

With any sensor network, capturing raw data is only the beginning of the story. We at Skybox are building a system to ingest and process the raw data, allowing data scientists and end users to ask arbitrary questions of the data, then publish the answers in an accessible way and at a scale that grows with the number of satellites in orbit. We selected Cloudera to support this deployment.
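
To make the “processed and combined to form a single image” step a little more concrete, here is a minimal sketch, assuming NumPy is available and that the captures are already co-registered. A real pipeline like Skybox’s also has to handle registration, radiometric correction, cloud masking and much more; the frame sizes and noise levels here are invented.

# Minimal sketch: combine several co-registered captures of one scene
# into a single cleaner image with a per-pixel median.
# Shapes and values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.integers(0, 255, size=(64, 64)).astype(float)

# Simulate five noisy captures of the same spot on Earth.
frames = [scene + rng.normal(0, 10, size=scene.shape) for _ in range(5)]

# The per-pixel median suppresses transient noise and outliers
# (sensor glitches, for example) better than any single frame.
combined = np.median(np.stack(frames), axis=0)

print("single-frame mean error:", np.abs(frames[0] - scene).mean())
print("combined mean error:    ", np.abs(combined - scene).mean())

The interesting engineering problem is doing that reduction, plus the arbitrary follow-on questions, at a scale that grows with the number of satellites in orbit, which is where the Cloudera/Hadoop side of the story comes in.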

Now is the time to start planning topic map based products that can incorporate this type of data.

There are lots of folks who are “curious” about what is happening next door, in the next block, a few “klicks” away, across the border, etc.

Not all of them have the funds for private “keyhole” satellites and vacuum data feeds. But they may have money to pay you for efficient and effective collation of intelligence data.

Topic maps empowering “Intelligence as a Service (InaaS)”?

October 3, 2012

News Reporting, Not Just DHS Fusion Centers, Ineffectual

Filed under: Intelligence,News,Security — Patrick Durusau @ 4:20 am

A report by the United States Senate, PERMANENT SUBCOMMITTEE ON INVESTIGATIONS, Committee on Homeland Security and Governmental Affairs, FEDERAL SUPPORT FOR AND INVOLVEMENT IN STATE AND LOCAL FUSION CENTERS (link to page with actual report), was described this way in the New York Times coverage:

One of the nation’s biggest domestic counterterrorism programs has failed to provide virtually any useful intelligence, according to Congressional investigators.

Their scathing report, to be released Wednesday, looked at problems in regional intelligence-gathering offices known as “fusion centers” that are financed by the Department of Homeland Security and created jointly with state and local law enforcement agencies.

The report found that the centers “forwarded intelligence of uneven quality — oftentimes shoddy, rarely timely, sometimes endangering citizens’ civil liberties and Privacy Act protections, occasionally taken from already published public sources, and more often than not unrelated to terrorism.”

The investigators reviewed 610 reports produced by the centers over 13 months in 2009 and 2010. Of these, the report said, 188 were never published for use within the Homeland Security Department or other intelligence agencies. Hundreds of draft reports sat for months, awaiting review by homeland security officials, making much of their information obsolete. And some of the reports appeared to be based on previously published information or facts that had long since been reported through the Federal Bureau of Investigation.

What is remarkable about a link to a page with the actual report?

After reading the New York Times article, I looked for a link in the article to the report. Nada. Zip. The null string. No link.

Searching over news reports from other major news outlets, same result.

Searching the US Senate, PERMANENT SUBCOMMITTEE ON INVESTIGATIONS website, at least as of 5:00 AM Eastern Standard time on October 3, 2012, fails to produce the report.

We aren’t lacking the “semantic web.”

There is a lack of linking to information sources. Links empower the reader to make their own judgements.

I expect “shoddy reporting” from the Department of Homeland Security. I don’t expect it from the New York Times. Or other major news outlets.

The report will be a “brief flash in the pan.” The news cycle will move on to the latest political gaffe or fraud, just as DHS folk move on to other ineffectual activities.

It would be nice to link up names, events, etc., from the report to past and future mentions of the same people and events.

Imagine Senator Levin asking: “This is your fifth appearance on questionable spending of government funds, in four separate agencies, under two different administrations?”

Accountability and transparency, a topic maps double shot.
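
As a minimal sketch of what that linking looks like in code — every name, subject identifier and report number below is hypothetical — collation amounts to merging every mention that shares a subject identifier, so one lookup returns all appearances across reports, agencies and administrations.

# Minimal sketch: collate mentions of the same subject across reports.
# All names, subject identifiers and report numbers are hypothetical.
from collections import defaultdict

mentions = [
    {"subject": "person:j-doe", "report": "PSI-2012-fusion-centers", "role": "program manager"},
    {"subject": "person:j-doe", "report": "GAO-2010-1234", "role": "contracting officer"},
    {"subject": "person:a-roe", "report": "PSI-2012-fusion-centers", "role": "analyst"},
]

# The topic map move: merge on the shared subject identifier, so one
# subject collects its occurrences no matter which report mentions it.
by_subject = defaultdict(list)
for m in mentions:
    by_subject[m["subject"]].append((m["report"], m["role"]))

for subject, occurrences in sorted(by_subject.items()):
    print(subject)
    for report, role in occurrences:
        print(f"  {report}: {role}")

That is the mechanical half; deciding that two mentions really are the same subject is the editorial half, and the half where topic maps earn their keep.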

