How To Avoid Lying to Government Agents (Memorize)

April 26th, 2017

How to Avoid Going to Jail under 18 U.S.C. Section 1001 for Lying to Government Agents by Solomon L. Wisenberg.

Great post but Wisenberg buries his best advice twelve paragraphs into the story. (Starts with: “Is there an intelligent alternative to lying….”)

Memorize this sentence:

I will not answer any questions without first consulting an attorney.

That’s it. Short, sweet and to the point. Make no statements at all other than that one. No “I have nothing to hide,” etc.

It’s like name, rank, serial number you see in the old war movies. Don’t say anything other than that sentence.

For every statement a government agent makes, simply repeat that sentence. Remember, you can’t lie if you don’t say anything other than that sentence.

See Wisenberg’s post for the details but the highlighted sentence is the only one you need.

How Do Hackers Live on $53.57? (‘Hack the Air Force’)

April 26th, 2017

I ask because once you get past the glowing generalities of USAF Launches ‘Hack the Air Force’:

Let the friendly hacking fly: The US Air Force will allow vetted white hat hackers and other computer security specialists to root out vulnerabilities in some of its main public websites.

You find:


Reina Staley, chief of staff for the Defense Digital Service, notes that white-hat hacking and crowdsourced security initiatives are often used by small businesses and large companies to beef up their security. Payouts for Hack the Air Force will be made based on the severity of the exploit discovered, and there will be only one payout per exploit.

Staley notes that the DoD’s Hack the Pentagon initiative, which was launched in April 2016 by the Defense Digital Service, was the federal government’s first bug bounty program. More than 1,400 hackers registered to participate, and DoD paid $75,000 in bounties.

“In the past, we contracted to a security research firm and they found less than 20 unique vulnerabilities,” Staley explains. “For Hack the Pentagon, the 1,400 hackers found 138 unique vulnerabilities, most of them previously unknown.”

Kim says Hack the Air Force is all about being more proactive in finding security flaws and fixing them quickly. “While the money is a draw, we’re also finding that people want to participate in the program for patriotic reasons as well. People want to see the Internet and Armed Forces networks become safer,” he says.

Let’s see, $75,000 split between 1,400 hackers, that’s $53.57 per hacker, on average. Some got more than average, some got nothing at all.

‘Hack the Air Force’ damages the defensive cybersecurity labor market by driving down compensation for cybersecurity skills. Those skills take time, hard work, and talent to develop, but the Air Force devalues them with chump change.

I fully agree with anyone who says government, DoD or Air Force cybersecurity sucks.

However, the Air Force chose to spend money on valets, chauffeurs for its generals, fighter jets that randomly burst into flames, etc., just as they chose to neglect cybersecurity.

Not my decision, not my problem.

Want an effective solution?

First step, “…use the free market Luke!” Create an Air Force contact point where hackers can anonymously submit notices of vulnerabilities. Institute a reliable and responsive process that offers compensation (market-based compensation) for those finds. Compensation paid in bitcoins.

Bear in mind that paying market rates and responding with market-reasonable speed will be critical to the success of such a portal. Yes, in a “huffy” voice, “you are the US Air Force,” but hackers have something you need and cannot supply yourself. Live with it.

Second step, create a very “lite” contracting process for when you need short-term cybersecurity audits or services. That means abandoning the layers of reports and the graft of primes, sub-primes and sub-sub-primes, with all the nest feathering by contracting officers, etc., along the way. Oh, and drop the drug tests as well. You want results, not squeaky-clean but so-so hackers.

Third step, disclose vulnerabilities in other armed services, both domestic and foreign. Time spent hacking them is time not spent hacking you. Yes?

Until the Air Force stops damaging the defensive cybersecurity labor market, boycott ‘Hack the Air Force’ at HackerOne and all similar efforts.

Is This Public Sector Corruption Map Knowingly False?

April 26th, 2017

The New York Times, Google, and Facebook would all report no.

Knowingly false?

It uses the definition of “corruption” in McCutcheon v. Federal Election Comm’n, 134 S. Ct. 1434 (2014).

Chief Justice Roberts writing for the majority:


Moreover, while preventing corruption or its appearance is a legitimate objective, Congress may target only a specific type of corruption—“quid pro quo” corruption. As Buckley explained, Congress may permissibly seek to rein in “large contributions [that] are given to secure a political quid pro quo from current and potential office holders.” 424 U. S., at 26. In addition to “actual quid pro quo arrangements,” Congress may permissibly limit “the appearance of corruption stemming from public awareness of the opportunities for abuse inherent in a regime of large individual financial contributions” to particular candidates. Id., at 27; see also Citizens United, 558 U. S., at 359 (“When Buckley identified a sufficiently important governmental interest in preventing corruption or the appearance of corruption, that interest was limited to quid pro quo corruption”).

Spending large sums of money in connection with elections, but not in connection with an effort to control the exercise of an officeholder’s official duties, does not give rise to such quid pro quo corruption. Nor does the possibility that an individual who spends large sums may
garner “influence over or access to” elected officials or political parties. Id., at 359; see McConnell v. Federal Election Comm’n, 540 U.S. 93, 297 (2003) (KENNEDY, J., concurring in judgment in part and dissenting in part). And because the Government’s interest in preventing the
appearance of corruption is equally confined to the appearance of quid pro quo corruption, the Government may not seek to limit the appearance of mere influence or access. See Citizens United, 558 U. S., at 360.
… (page 20)

But with the same “facts,” if your definition of “quid pro quo” includes campaign contributions, then this map is obviously false.

In fact, Christopher Robertson, D. Alex Winkelman, Kelly Bergstrand, and Darren Modzelewski, in The Appearance and the Reality of Quid Pro Quo Corruption: An Empirical Investigation, Journal of Legal Analysis (2016) 8(2): 375-438, DOI: https://doi.org/10.1093/jla/law006, conduct an empirical investigation into how jurors could view campaign contributions as “quid pro quo.”

Abstract:

The Supreme Court says that campaign finance regulations are unconstitutional unless they target “quid pro quo” corruption or its appearance. To test those appearances, we fielded two studies. First, in a highly realistic simulation, three grand juries deliberated on charges that a campaign spender bribed a Congressperson. Second, 1271 representative online respondents considered whether to convict, with five variables manipulated randomly. In both studies, jurors found quid pro quo corruption for behaviors they believed to be common. This research suggests that Supreme Court decisions were wrongly decided, and that Congress and the states have greater authority to regulate campaign finance. Prosecutions for bribery raise serious problems for the First Amendment, due process, and separation of powers. Safe harbors may be a solution.

Using Robertson, et al.’s “quid pro quo,” or even a more reasonable definition of “corruption”:

Transparency International defines corruption broadly as the abuse of entrusted power for private gain. (What is Public Sector Corruption?)

a re-colorization of the map shows a different reading of corruption in the United States:

Do you think the original map (top) is going to appear with warnings that it depends on how you define corruption?

Or with a note saying a definition was chosen to conceal corruption of the US government?

I didn’t think so either.

PS: The U.S. has less minor corruption than many countries. The practice of and benefits from corruption are limited to the extremely wealthy.

Metron – A Fist Full of Subjects

April 24th, 2017

Metron – Apache Incubator

From the description:

Metron integrates a variety of open source big data technologies in order to offer a centralized tool for security monitoring and analysis. Metron provides capabilities for log aggregation, full packet capture indexing, storage, advanced behavioral analytics and data enrichment, while applying the most current threat-intelligence information to security telemetry within a single platform.

Metron can be divided into 4 areas:

  1. A mechanism to capture, store, and normalize any type of security telemetry at extremely high rates. Because security telemetry is constantly being generated, it requires a method for ingesting the data at high speeds and pushing it to various processing units for advanced computation and analytics.
  2. Real time processing and application of enrichments such as threat intelligence, geolocation, and DNS information to telemetry being collected. The immediate application of this information to incoming telemetry provides the context and situational awareness, as well as the “who” and “where” information that is critical for investigation.
  3. Efficient information storage based on how the information will be used:
    1. Logs and telemetry are stored such that they can be efficiently mined and analyzed for concise security visibility
    2. The ability to extract and reconstruct full packets helps an analyst answer questions such as who the true attacker was, what data was leaked, and where that data was sent
    3. Long-term storage not only increases visibility over time, but also enables advanced analytics such as machine learning techniques to be used to create models on the information. Incoming data can then be scored against these stored models for advanced anomaly detection.
  4. An interface that gives a security investigator a centralized view of data and alerts passed through the system. Metron’s interface presents alert summaries with threat intelligence and enrichment data specific to that alert on one single page. Furthermore, advanced search capabilities and full packet extraction tools are presented to the analyst for investigation without the need to pivot into additional tools.

Big data is a natural fit for powerful security analytics. The Metron framework integrates a number of elements from the Hadoop ecosystem to provide a scalable platform for security analytics, incorporating such functionality as full-packet capture, stream processing, batch processing, real-time search, and telemetry aggregation. With Metron, our goal is to tie big data into security analytics and drive towards an extensible centralized platform to effectively enable rapid detection and rapid response for advanced security threats.
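To make “enrichment” concrete, here is a toy Python sketch (not Metron code; every feed, name, and field below is invented for illustration) of attaching geolocation and threat-intelligence context to a single incoming event:

# Toy telemetry enrichment: join an incoming event with geolocation and
# threat-intel context keyed on the source IP. Illustrative only, not Metron.
GEO = {"203.0.113.7": {"country": "Example-land", "city": "Springfield"}}
THREAT_INTEL = {"203.0.113.7": {"listed": True, "source": "demo-feed"}}

def enrich(event):
    """Return a copy of the event with geo and threat context attached."""
    ip = event.get("src_ip")
    enriched = dict(event)
    enriched["geo"] = GEO.get(ip, {})
    enriched["threat"] = THREAT_INTEL.get(ip, {"listed": False})
    return enriched

event = {"src_ip": "203.0.113.7", "dest_port": 445, "protocol": "smb"}
print(enrich(event))

Metron does this at scale over streaming telemetry; the point here is only the shape of the join.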

Some useful links:

Metron (website)

Metron wiki

Metron Jira

Metron Git

Security threats aren’t going to assign themselves unique and immutable IDs. Which means they will be identified by characteristics and associated with particular acts (think associations), which are composed of other subjects, such as particular malware, dates, etc.

Being able to robustly share such identifications (unlike the “we’ve seen this before at some unknown time, with unknown characteristics,” typical of Russian attribution reports) would be a real plus.
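To sketch what identification-by-characteristics could look like, here is a toy Python example (all field names and values invented): two reports with no shared ID are treated as the same subject when they agree on a chosen key set of characteristics.

# Toy identification-by-characteristics: merge threat reports that agree on a
# chosen set of identifying properties, even though they carry no common ID.
def identity_key(report, keys=("malware_family", "c2_domain", "dropper_hash")):
    return frozenset((k, report[k]) for k in keys if k in report)

report_a = {"malware_family": "demo-rat", "c2_domain": "bad.example",
            "seen": "2017-03-01", "reporter": "vendor-1"}
report_b = {"malware_family": "demo-rat", "c2_domain": "bad.example",
            "seen": "2017-04-12", "reporter": "vendor-2"}

# Same identity key, so the two reports (and their associations) can be merged.
print(identity_key(report_a) == identity_key(report_b))  # True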

Looks like a great opportunity for topic maps-like thinking.

Yes?

3 Reasons to Read: Algorithms to Live By

April 24th, 2017

How Algorithms can untangle Human Questions. Interview with Brian Christian by Roberto V. Zicari.

The entire interview is worth your study but the first question and answer establish why you should read Algorithms to Live By:

Q1. You have worked with cognitive scientist Tom Griffiths (professor of psy­chol­ogy and cognitive science at UC Berkeley) to show how algorithms used by computers can also untangle very human questions. What are the main lessons learned from such a joint work?

Brian Christian: I think ultimately there are three sets of insights that come out of the exploration of human decision-making from the perspective of computer science.

The first, quite simply, is that identifying the parallels between the problems we face in everyday life and some of the canonical problems of computer science can give us explicit strategies for real-life situations. So-called “explore/exploit” algorithms tell us when to go to our favorite restaurant and when to try something new; caching algorithms suggest — counterintuitively — that the messy pile of papers on your desk may in fact be the optimal structure for that information.

Second is that even in cases where there is no straightforward algorithm or easy answer, computer science offers us both a vocabulary for making sense of the problem, and strategies — using randomness, relaxing constraints — for making headway even when we can’t guarantee we’ll get the right answer every time.

Lastly and most broadly, computer science offers us a radically different picture of rationality than the one we’re used to seeing in, say, behavioral economics, where humans are portrayed as error-prone and irrational. Computer science shows us that being rational means taking the costs of computation — the costs of decision-making itself — into account. This leads to a much more human, and much more achievable picture of rationality: one that includes making mistakes and taking chances.
… (emphasis in original)
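To make the “explore/exploit” point concrete, here is a toy epsilon-greedy sketch in Python (purely illustrative; not from the book or the interview):

# Epsilon-greedy "restaurant" chooser: mostly exploit the best-known option,
# occasionally explore an untried one.
import random

ratings = {"favorite": 4.5, "new thai place": None, "diner": 3.0}
epsilon = 0.2  # fraction of visits spent exploring

def choose(ratings, epsilon):
    untried = [name for name, r in ratings.items() if r is None]
    if untried and random.random() < epsilon:
        return random.choice(untried)                    # explore
    rated = {name: r for name, r in ratings.items() if r is not None}
    return max(rated, key=rated.get)                     # exploit

print(choose(ratings, epsilon))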

After the 2016 U.S. presidential election, I thought the verdict that humans are error-prone and irrational was unassailable.

Looking forward to the use of a human constructed lens (computer science) to view “human questions.” There are answers to “human questions” baked into computer science so watching the authors unpack those will be an interesting read. (Waiting for my copy to arrive.)

Just so you know, the Picador edition is a reprint. It was originally published by William Collins, 21/04/2016 in hardcover, see: Algorithms to Live By, a short review by Roberto Zicari, October 24, 2016.

Scotland Yard Outsources Violation of Your Privacy

April 24th, 2017

Whistleblower uncovers London police hacking of journalists and protestors by Trevor Johnson.

From the post:

The existence of a secretive unit within London’s Metropolitan Police that uses hacking to illegally access the emails of hundreds of political campaigners and journalists has been revealed. At least two of the journalists work for the Guardian.

Green Party representative in the British House of Lords, Jenny Jones, exposed the unit’s existence in an opinion piece in the Guardian. The facts she revealed are based on a letter written to her by a whistleblower.

The letter reveals that through the hacking, Scotland Yard has illegally accessed the email accounts of activists for many years, and this was possible due to help from “counterparts in India.” The letter alleged that the Metropolitan Police had asked police in India to obtain passwords on their behalf—a job that the Indian police subcontracted out to groups of hackers in India.

The Indian hackers sent back the passwords obtained, which were then used illegally by the unit within the Met to gather information from the emails of those targeted.

Trevor covers a number of other points, additional questions that should be asked, the lack of media coverage over this latest outrage, etc., all of which merit your attention.

From my perspective, these abuses by the London Metropolitan Police (Scotland Yard) are examples of the terrorism bogeyman furthering government designs against quarrelsome but otherwise ordinary citizens.

Spying on quarrelsome but otherwise ordinary citizens is far safer and easier than seeking out actual wrongdoers. And spying justifies part of Scotland Yard’s budget, since everyone “knows” a lack of actionable intelligence means terrorists are hiding successfully, not the more obvious lack of terrorists to be found.

As described in Trevor’s post, Scotland Yard, like all other creatures of government, thrives in shadows. Shadows where its decisions are beyond discussion and reproach.

In choosing between supporting government spawned creatures that live in the shadows and working to dispel the shadows that foster them, remember they are not, were not and never will be “…on your side.”

They have a side, but it most assuredly is not yours.

Leaking Improves Security – Secrecy Weakens It

April 24th, 2017

If you need a graphic for the point that leaking improves security – secrecy weakens it, consider this one:

Ask your audience:

Prior to the Shadow Brokers leak of the NSA’s DoublePulsar Malware, how many people were researching a counter to it?

Same question, but substitute: After the Shadow Brokers leak ….

As the headline says: Leaking Improves Security – Secrecy Weakens It.

Image originates from: Over 36,000 Computers Infected with NSA’s DoublePulsar Malware by Catalin Cimpanu.

Anyone who suggests otherwise wants you and others to be insecure.

Fraudulent Peer Review – Clue? Responded On Time!

April 23rd, 2017

107 cancer papers retracted due to peer review fraud by Cathleen O’Grady.

As if peer review weren’t enough of a sham, some authors took it to another level:


It’s possible to fake peer review because authors are often asked to suggest potential reviewers for their own papers. This is done because research subjects are often blindingly niche; a researcher working in a sub-sub-field may be more aware than the journal editor of who is best-placed to assess the work.

But some journals go further and request, or allow, authors to submit the contact details of these potential reviewers. If the editor isn’t aware of the potential for a scam, they then merrily send the requests for review out to fake e-mail addresses, often using the names of actual researchers. And at the other end of the fake e-mail address is someone who’s in on the game and happy to send in a friendly review.

Fake peer reviewers often “know what a review looks like and know enough to make it look plausible,” said Elizabeth Wager, editor of the journal Research Integrity & Peer Review. But they aren’t always good at faking less obvious quirks of academia: “When a lot of the fake peer reviews first came up, one of the reasons the editors spotted them was that the reviewers responded on time,” Wager told Ars. Reviewers almost always have to be chased, so “this was the red flag. And in a few cases, both the reviews would pop up within a few minutes of each other.”

I’m sure timely submission of reviews wasn’t the only basis for calling fraud, but it is an amusing one.

It’s past time to jettison the bloated machinery of peer review. Judge work by its use, not where it’s published.

Anonymous Domain Registration Service [Update: 24 April 2017]

April 23rd, 2017

Pirate Bay Founder Launches Anonymous Domain Registration Service

Does this sound anonymous to you?


With Njalla, customers don’t buy the domain names themselves, they let the company do it for them. This adds an extra layer of protection but also requires some trust.

A separate agreement grants the customer full usage rights to the domain. This also means that people are free to transfer it elsewhere if they want to.

“Think of us as your friendly drunk (but responsibly so) straw person that takes the blame for your expressions,” Njalla notes.

Njalla

Perhaps I’m being overly suspicious but what is the basis for trusting Njalla?

I would feel better if Njalla only possessed a key that would decrypt (read: authenticate) messages as arriving from the owner of some.domain.

Other than payment, what other interest do they have in an owner’s actual identity?

Perhaps I should bump them about that idea.


Update: On further inquiry, registration only requires an email or jabber contact point. You can handle being anonymous to Njalla at those points. So, more anonymous than I thought.

Dissing Facebook’s Reality Hole and Impliedly Censoring Yours

April 23rd, 2017

Climbing Out Of Facebook’s Reality Hole by Mat Honan.

From the post:

The proliferation of fake news and filter bubbles across the platforms meant to connect us have instead divided us into tribes, skilled in the arts of abuse and harassment. Tools meant for showing the world as it happens have been harnessed to broadcast murders, rapes, suicides, and even torture. Even physics have betrayed us! For the first time in a generation, there is talk that the United States could descend into a nuclear war. And in Silicon Valley, the zeitgeist is one of melancholy, frustration, and even regret — except for Mark Zuckerberg, who appears to be in an absolutely great mood.

The Facebook CEO took the stage at the company’s annual F8 developers conference a little more than an hour after news broke that the so-called Facebook Killer had killed himself. But if you were expecting a somber mood, it wasn’t happening. Instead, he kicked off his keynote with a series of jokes.

It was a stark disconnect with the reality outside, where the story of the hour concerned a man who had used Facebook to publicize a murder, and threaten many more. People used to talk about Steve Jobs and Apple’s reality distortion field. But Facebook, it sometimes feels, exists in a reality hole. The company doesn’t distort reality — but it often seems to lack the ability to recognize it.

I can’t say I’m fond of the Facebook reality hole but unlike Honan:


It can make it harder to use its platforms to harass others, or to spread disinformation, or to glorify acts of violence and destruction.

I have no desire to censor any of the content that anyone cares to make and/or view on it. Bar none.

The “default” reality settings desired by Honan and others are a thumb on the scale for some cause they prefer over others.

Entitled to their preference but I object to their setting the range of preferences enjoyed by others.

You?

ARM Releases Machine Readable Architecture Specification (Intel?)

April 22nd, 2017

ARM Releases Machine Readable Architecture Specification by Alastair Reid.

From the post:

Today ARM released version 8.2 of the ARM v8-A processor specification in machine readable form. This specification describes almost all of the architecture: instructions, page table walks, taking interrupts, taking synchronous exceptions such as page faults, taking asynchronous exceptions such as bus faults, user mode, system mode, hypervisor mode, secure mode, debug mode. It details all the instruction formats and system register formats. The semantics is written in ARM’s ASL Specification Language so it is all executable and has been tested very thoroughly using the same architecture conformance tests that ARM uses to test its processors (See my paper “Trustworthy Specifications of ARM v8-A and v8-M System Level Architecture”.)

The specification is being released in three sets of XML files:

  • The System Register Specification consists of an XML file for each system register in the architecture. For each register, the XML details all the fields within the register, how to access the register and which privilege levels can access the register.
  • The AArch64 Specification consists of an XML file for each instruction in the 64-bit architecture. For each instruction, there is the encoding diagram for the instruction, ASL code for decoding the instruction, ASL code for executing the instruction and any supporting code needed to execute the instruction and the decode tree for finding the instruction corresponding to a given bit-pattern. This also contains the ASL code for the system architecture: page table walks, exceptions, debug, etc.
  • The AArch32 Specification is similar to the AArch64 specification: it contains encoding diagrams, decode trees, decode/execute ASL code and supporting ASL code.

Alastair provides starting points for use of this material by outlining his prior uses of the same.
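As a minimal illustration of consuming the release programmatically, a Python sketch along these lines will walk the system register files and print register names and fields. The directory, element, and attribute names below are guesses for illustration only; check the schema shipped with the XML before relying on them.

# Sketch only: element/attribute names are assumptions, not the actual schema.
import glob
import xml.etree.ElementTree as ET

for path in glob.glob("SysReg_xml/*.xml"):                 # placeholder path
    root = ET.parse(path).getroot()
    for reg in root.iter("register"):                      # assumed element
        name = reg.findtext("reg_short_name", default="?") # assumed element
        fields = [f.get("name", "?") for f in reg.iter("field")]  # assumed
        print(name, fields)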

This raises the question: why isn’t an equivalent machine-readable data set available for Intel® 64 and IA-32 Architectures? (PDF manuals)

The data is there, but not in a machine readable format.

Anyone know why Intel doesn’t provide the same convenience?

Journalism Is Skepticism as a Service (SaaS)

April 22nd, 2017

Image from the Fourth Estate Journalism Association.

I applaud the sentiment and supporting the Fourth Estate is one way to bring it closer to reality.

At the same time, unless and until The New York Times, National Public Radio, and others start reporting US terrorist attacks (bombings) with the same terminology they apply to so-called “terrorists” in their coverage, “Journalism Is Skepticism as a Service (SaaS)” remains an aspiration, not a reality.

Shortfall in Peer Respect and Accomplishment

April 22nd, 2017

I didn’t expect UK government confirmation of my post: Shortfall in Cybersecurity Talent or Compensation? so quickly!

I argued against the groundless claims of a shortage of cybersecurity talent in the face of escalating cybercrime and hacking statistics.

If there were a shortage of cybersecurity talent, cybercrime should be going down. But it’s not.

The National Crime Agency reports:

The National Crime Agency has today published research into how and why some young people become involved in cyber crime.

The report, which is based on debriefs with offenders and those on the fringes of criminality, explores why young people assessed as unlikely to commit more traditional crimes get involved in cyber crime.

The report emphasises that financial gain is not necessarily a priority for young offenders. Instead, the sense of accomplishment at completing a challenge, and proving oneself to peers in order to increase online reputations are the main motivations for those involved in cyber criminality.

Government agencies, like the FBI for example, are full of lifers who take their breaks at precisely 14:15, have their favorite parking spots, play endless office politics, and are masters of passive-aggression, making government and/or corporate work too painful to contemplate for young cybersecurity talent.

In short, a lack of meaningful peer respect and a sense of accomplishment is defeating both government and private hiring of cybersecurity talent.

Read Pathways Into Cyber Crime and evaluate how the potential young hires described there would react to your staff meetings and organizational structure.

That bad? Wow, you are worse off than I thought.

So, are you going to keep with your certificate-driven, cubicle-based, Dilbert-like cybersecurity effort?

How’s that working out for you?

You will have to take risks to find better solutions but you are losing already. Enough to chance a different approach?

Shortfall in Cybersecurity Talent or Compensation?

April 21st, 2017

Federal effort is needed to address shortfall in cybersecurity talent by Mike McConnell and Judy Genshaft.

If you want to get in on cybersecurity training scam business, there are a number of quotes you can lift from this post. Consider:

Our nation is under attack. Every day, thousands of entities – private enterprises, public institutions and individual citizens—have their computer networks breached, their systems hacked and their data stolen, degraded or destroyed. Such critical infrastructure impacts the cyber-sanctity of our banking system and electric power grid, each vital to our national security. We believe systemically developing more skilled cybersecurity defenders is the essential link needed to protect our nation from ‘bad actors’’ who would exploit our vital systems.

In its latest global survey, the Information Security Certification Consortium (ISC²) projects a cybersecurity talent shortfall of as much as 1.8 million professionals by 2022­­. This shortage in skilled cybersecurity professionals means that all data and digital systems are at risk. Closing the cyber talent gap will require sustained and concerted efforts of government, the private sector, and educational institutions at all levels.

If you don’t already know that hacking increases every year, spend some time at: Hackmageddon. Or with any security report on hacking.

Think about it. How does cybercrime keep increasing during a shortfall of cybersecurity talent?

Answer: It doesn’t. Plenty of cybersecurity talent, just a shortfall on one side of the picture.

A “shortfall,” if you want to call it that, caused by low wages and unreasonable working conditions (no weed, even off the job).

All calls for more cybersecurity talent emphasize being on the “right side,” protecting your country, the system, etc., all BS that you can’t put in the bank.

If you want better cybersecurity, offer aggressive compensation packages and very flexible working conditions. The talent is out there, it’s just not free. (Nor should it be.)

Leak “Threatens Windows Users Around The World?”

April 20th, 2017

Leaked NSA Malware Threatens Windows Users Around The World? by Sam Biddle.

Really? Shadow Brokers leaking alleged NSA malware “threatens users around the world?”

Hmmm, I would think that the NSA developing Windows malware is what threatens users around the world.

Yes?

Unlike the apparent industry concealment of vulnerabilities, the leaking of NSA malware puts all users on an equal footing with regard to those vulnerabilities.

In a phrase, users are better off for the NSA malware leak than they were before.

They know (or at least it has been alleged) that these leaked vulnerabilities have been patched in supported Microsoft products. By upgrading to those products, they can avoid these particular pieces of NSA malware.

Leaking vulnerabilities enables users to avoid perils themselves, in this case by upgrading, and/or to demand patches from vendors responsible for the vulnerabilities.

Do you see a downside I don’t?

Well, aside from trashing the market for vulnerabilities and gelding security agencies, neither one of which I will lose any sleep over.

Black Womxn Authors, Library of Congress and MarcXML (Part 2)

April 20th, 2017

(After writing this post I got a message from Clifford Anderson on a completely different way to approach the Marc to XML problem. A very neat way. But, I thought the directions on installing MarcEdit on Ubuntu 16.04 would be helpful anyway. More on Clifford’s suggestion to follow.)

If you’re just joining, read Black Womxn Authors, Library of Congress and MarcXML (Part 1) for the background on why this flurry of installation is at all meaningful!

The goal is to get a working copy of MarcEdit installed on my Ubuntu 16.04 machine.

MarcEdit Linux Installation Instructions reads in part:

Installation Steps:

  1. Download the MarcEdit app bundle. This file has been zipped to reduce the download size. http://marcedit.reeset.net/software/marcedit.bin.zip
  2. Unzip the file and open the MarcEdit folder. Find the Install.txt file and read it.
  3. Ensure that you have the Mono framework installed. What is Mono? Mono is an open source implementation of Microsoft’s .NET framework. The best way to describe it is that .NET is very Java-like; it’s a common runtime that can work across any platform in which the framework has been installed. There are a number of ways to get the Mono framework — for MarcEdit’s purposes, it is recommended that you download and install the official package available from the Mono Project’s website. You can find the Mac OSX download here: http://www.go-mono.com/mono-downloads/download.html
  4. Run MarcEdit via the command line using mono MarcEdit.exe from within the MarcEdit directory.

Well, sort of. 😉

First, you need to go to the Mono Project Download page. From there, under Xamarin packages, follow Debian, Ubuntu, and derivatives.

There is a package for Ubuntu 16.10, but it’s Mono 4.2.1. By installing the Xamarin packages, I am running Mono 4.7.0. Your call but as a matter of habit, I run the latest compatible packages.

Updating your package lists for Debian, Ubuntu, and derivatives:

Add the Mono Project GPG signing key and the package repository to your system (if you don’t use sudo, be sure to switch to root):

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list

And for Ubuntu 16.10:

echo "deb http://download.mono-project.com/repo/debian wheezy-apache24-compat main" | sudo tee -a /etc/apt/sources.list.d/mono-xamarin.list

Now run:

sudo apt-get update

The Usage section suggests:

The package mono-devel should be installed to compile code.

The package mono-complete should be installed to install everything – this should cover most cases of “assembly not found” errors.

The package referenceassemblies-pcl should be installed for PCL compilation support – this will resolve most cases of “Framework not installed: .NETPortable” errors during software compilation.

The package ca-certificates-mono should be installed to get SSL certificates for HTTPS connections. Install this package if you run into trouble making HTTPS connections.

The package mono-xsp4 should be installed for running ASP.NET applications.

Find and select mono-complete first. Most decent package managers will show dependencies that will be installed. Add any of these that were missed.

Do follow the hints here to verify that Mono is working correctly.

Are We There Yet?

Not quite. It was at this point that I unpacked http://marcedit.reeset.net/software/marcedit.bin.zip and discovered there is no “Install.txt file.” Rather there is a linux_install.txt, which reads:

a) Ensure that the dependencies have been installed
1) Dependency list:
i) MONO 3.4+ (Runtime plus the System.Windows.Forms library [these are sometimes separate])
ii) YAZ 5 + YAZ 5 develop Libraries + YAZ++ ZOOM bindings
iii) ZLIBC libraries
iV) libxml2/libxslt libraries
b) Unzip marcedit.zip
c) On first run:
a) mono MarcEdit.exe
b) Preferences tab will open, click on other, and set the following two values:
i) Temp path: /tmp/
ii) MONO path: [to your full mono path]

** For Z39.50 Support
d) Yaz.Sharp.dll.config — ensure that the dllmap points to the correct version of the shared libyaz object.
e) main_icon.bmp can be used for a desktop icon

Oops! Without unzipping marcedit.zip, you won’t see the dependencies:

ii) YAZ 5 + YAZ 5 develop Libraries + YAZ++ ZOOM bindings
iii) ZLIBC libraries
iV) libxml2/libxslt libraries

The YAZ site has a readme file for Ubuntu, but here is the very abbreviated version:


wget http://ftp.indexdata.dk/debian/indexdata.asc
sudo apt-key add indexdata.asc

echo "deb http://ftp.indexdata.dk/ubuntu xenial main" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://ftp.indexdata.dk/ubuntu xenial main" | sudo tee -a /etc/apt/sources.list

(That sequence only works for Ubuntu xenial. See the readme file for other versions.)

Of course:

sudo apt-get update

As of today, you are looking for yaz 5.21.0-1 and libyaz5-dev 5.21.0-1.

Check for and/or install ZLIBC and libxml2/libxslt libraries.

Personal taste but I reboot at this point to make sure all the libraries re-load to the correct versions, etc. Should work without rebooting but that’s up to you.

Fire it up with

mono MarcEdit.exe

Choose Locations (not Other), confirm “Set Temporary Path:” is /tmp/, and set MONO Path (the location of mono; try which mono and input the result), then select OK.

I did the install on Sunday evening and so after all this, the software on loading announces it has been upgraded! Yes, while I was installing all the dependencies, a new and improved version of MarcEdit was posted.

The XML extraction is a piece of cake so I am working on the XQuery on the resulting MarcXML records for part 3.

Conclusive Reason To NOT Use Gmail

April 20th, 2017

Using an email service, Gmail for example, that tracks (and presumably reads) your incoming and outgoing mail is poor security judgement.

Following a California magistrate ruling on 19 April 2017, it’s suicidal.

Shaun Nichols covers the details in Nuh-un, Google, you WILL hand over emails stored on foreign servers, says US judge.

But the only part of the decision that should interest you reads:


The court denies Google’s motion to quash the warrant for content that it stores outside the United States and orders it to produce all content responsive to the search warrant that is retrievable from the United States, regardless of the data’s actual location.

Beeler takes heart from the dissents in In the Matter of a Warrant to Search a Certain E-Mail Account Controlled & Maintained by Microsoft Corp., 829 F.3d 197 (2d Cir. 2016), reh’g denied en banc, No. 14-2985, 2017 WL 362765 (2d Cir. Jan. 24, 2017), to find that if data isn’t intentionally stored outside the US and can be accessed from within the US, then it’s subject to a warrant under 18 U.S.C. § 2703(a), the Stored Communications Act (“SCA”).

I have a simpler perspective: Do you want to risk fortune and freedom on how-many-angels-can-dance-on-the-head-of-a-pin questions about 18 U.S.C. § 2703(a), the Stored Communications Act (“SCA”)?

If your answer is no, don’t use Gmail. Or any other service where data can be accessed from the United States under 18 U.S.C. § 2703(a), or under similar statutes in other jurisdictions.

For that matter, prudent users restrict themselves to Tor based mail services and always use strong encryption.

Almost any communication can be taken as a crime or step in a conspiracy by a prosecutor inclined to do so.

The only partially safe haven is silence. (Where encryption and/or inability to link you to the encrypted communication = silence.)

Who Prefers Zero Days over 7 Year Old Bugs? + Legalization of Hacking

April 20th, 2017

“Who” is not clear but Dan Goodin reports in Windows bug used to spread Stuxnet remains world’s most exploited that:

One of the Microsoft Windows vulnerabilities used to spread the Stuxnet worm that targeted Iran remained the most widely exploited software bug in 2015 and 2016 even though the bug was patched years earlier, according to a report published by antivirus provider Kaspersky Lab.

In 2015, 27 percent of Kaspersky users who encountered any sort of exploit were exposed to attacks targeting the critical Windows flaw indexed as CVE-2010-2568. In 2016, the figure dipped to 24.7 percent but still ranked the highest. The code-execution vulnerability is triggered by plugging a booby-trapped USB drive into a vulnerable computer. The second most widespread exploit was designed to gain root access rights to Android phones, with 11 percent in 2015 and 15.6 percent last year.

A market share of almost 25%, despite being patched in 2010, marks CVE-2010-2568 as one of the top bugs a hacker should have in their toolkit.

Not to denigrate finding zero day flaws in vibrators and other IoT devices, or more exotic potential exploits in the Linux kernel, but if you approach hacking as an investment, the “best” tools aren’t always the most recent ones. (“Best” defined as the highest return for mastery and use.)

Looking forward to the legalization of hacking, unauthorized penetration of information systems, with civil and criminal penalties for owners of those systems who get hacked.

I suggest that because making hacking illegal has done nothing to stem the tide of hacking. Mostly because threatening people you can’t find, or who think they won’t be found, is by definition ineffectual.

Making hacking legal and penalizing business interests who get hacked is a threat against people you can find on a regular basis. They pay taxes, register their stocks, market their products.

Speaking of paying taxes, there could be an OS upgrade tax credit. Something to nudge all the Windows XP, Vista, and 7 instances out of existence. That alone would be the largest single improvement in cybersecurity since that became a term.

Legalized, hackers would provide a continuing incentive (fines and penalties) for better software and more consistent upgrade practices. Take advantage of that large pool of unpaid but enthusiastic labor (hackers).

Dive Into NLTK – Update – No NLTK Book 2nd Edition

April 19th, 2017

Dive Into NLTK, Part I: Getting Started with NLTK

From the webpage:

NLTK is the most famous Python Natural Language Processing Toolkit, here I will give a detail tutorial about NLTK. This is the first article in a series where I will write everything about NLTK with Python, especially about text mining and text analysis online.

This is the first article in the series “Dive Into NLTK”, here is an index of all the articles in the series that have been published to date:

Part I: Getting Started with NLTK (this article)
Part II: Sentence Tokenize and Word Tokenize
Part III: Part-Of-Speech Tagging and POS Tagger
Part IV: Stemming and Lemmatization
Part V: Using Stanford Text Analysis Tools in Python
Part VI: Add Stanford Word Segmenter Interface for Python NLTK
Part VII: A Preliminary Study on Text Classification
Part VIII: Using External Maximum Entropy Modeling Libraries for Text Classification
Part IX: From Text Classification to Sentiment Analysis
Part X: Play With Word2Vec Models based on NLTK Corpus

My first post on this series had only the first seven lessons listed.

There’s another reason for this update.

It appears that no second edition of Natural Language Processing with Python is likely to appear.

Sounds like an opportunity for the NLTK community to continue the work already started.

I don’t have the chops to contribute high quality code but would be willing to work with others on proofing/editing (that’s the part of book production readers rarely see).
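If you want a quick smoke test that your NLTK install is ready for Part I, something like this is enough (it assumes nltk is installed and can download the punkt tokenizer model):

# Minimal NLTK check: download the punkt model and tokenize a sentence or two.
import nltk
nltk.download("punkt", quiet=True)

from nltk.tokenize import sent_tokenize, word_tokenize

text = "NLTK is a Python natural language toolkit. This is a quick check."
print(sent_tokenize(text))
print(word_tokenize(text))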

Shadow Brokers Compilation Dates

April 19th, 2017

ShadowBrokers EquationGroup Compilation Timestamp Observation

From the post:

I looked at the IOCs @GossiTheDog posted, looked each up in virus total and dumped the compilation timestamp into a spreadsheet.

To step back a second, the Microsoft Windows compiler embeds the date and time that the given .exe or .dll was compiled. Compilation time is a very useful characteristic of Portable Executable. Malware authors could zero it or change it to a random value, but I’m not sure there is any indication of that here. If the compilation timestamps are real, then there’s an interesting observation in this dataset.

A very clever observation! Check time stamps for patterns!
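A rough Python sketch of the same check (assuming the pefile library is installed; the file names are placeholders) pulls the PE header compile timestamp from each sample:

# Extract the compilation timestamp from each PE sample's file header.
import datetime
import glob

import pefile

for path in glob.glob("samples/*.exe"):
    pe = pefile.PE(path, fast_load=True)
    ts = pe.FILE_HEADER.TimeDateStamp
    print(path, datetime.datetime.utcfromtimestamp(ts).isoformat())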

The timestamp data enables an attentive reader to ask:

  1. Were the Shadow Brokers’ exploits stolen prior to 2013-08-22?
  2. If no to #1, where are the exploits post 2013-08-22?

Have the dumps so far been distant lightning that precedes much closer thunderclaps?

Imagine compilation timestamps in 2014, 2015, or even 2016?

Listen for Shadow Brokers to roar!

Building a Keyword Monitoring Pipeline… (Think Download Before Removal)

April 19th, 2017

Building a Keyword Monitoring Pipeline with Python, Pastebin and Searx by Justin Seitz.

From the post:

Having an early warning system is an incredibly useful tool in the OSINT world. Being able to monitor search engines and other sites for keywords, IP addresses, document names, or email addresses is extremely useful. This can tell you if an adversary, competitor or a friendly ally is talking about you online. In this blog post we are going to setup a keyword monitoring pipeline so that we can monitor both popular search engines and Pastebin for keywords, leaked credentials, or anything else we are interested in.

The pipeline will be designed to alert you whenever one of those keywords is discovered or if you are seeing movement for a keyword on a particular search engine.

Learning of data that was posted but is no longer available, is a sad thing.

Increase your odds of grabbing data before removal by following Justin’s post.
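A minimal sketch of the search-engine half of such a pipeline might look like the following. It assumes a self-hosted Searx instance that exposes the JSON output format; the URL and keywords are placeholders, and this is not Justin’s code.

# Poll a Searx instance for a list of keywords and print any hits.
import requests

SEARX = "https://searx.example.org/search"      # your own instance
KEYWORDS = ["example-keyword", "203.0.113.7"]

for kw in KEYWORDS:
    resp = requests.get(SEARX, params={"q": kw, "format": "json"}, timeout=30)
    for hit in resp.json().get("results", []):
        print(kw, "->", hit.get("title"), hit.get("url"))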

A couple of caveats:

  • I would not use GMail, preferring a Tor mail solution, especially for tracking Pastebin postings.
  • Use and rotate at random VPN connections for your Searx setup.

Going completely dark takes more time and effort than most of us can spare, but you can avoid being like a new car dealership with search lights crossing the sky.

Pure CSS crossword – CSS Grid

April 19th, 2017

Pure CSS crossword – CSS Grid by Adrian Roworth.

The UI is slick, although creating the puzzle remains on you.

Certainly suitable for string answers, XQuery/XPath/XSLT expressions, etc.

Enjoy!

An Initial Reboot of Oxlos

April 18th, 2017

An Initial Reboot of Oxlos by James Tauber.

From the post:

In a recent post, Update on LXX Progress, I talked about the possibility of putting together a crowd-sourcing tool to help share the load of clarifying some parse code errors in the CATSS LXX morphological analysis. Last Friday, Patrick Altman and I spent an evening of hacking and built the tool.

Back at BibleTech 2010, I gave a talk about Django, Pinax, and some early ideas for a platform built on them to do collaborative corpus linguistics. Patrick Altman was my main co-developer on some early prototypes and I ended up hiring him to work with me at Eldarion.

The original project was called oxlos after the betacode transcription of the Greek word for “crowd”, a nod to “crowd-sourcing”. Work didn’t continue much past those original prototypes in 2010 and Pinax has come a long way since so, when we decided to work on oxlos again, it made sense to start from scratch. From the initial commit to launching the site took about six hours.

At the moment there is one collective task available—clarifying which of a set of parse codes is valid for a given verb form in the LXX—but as the need for others arises, it will be straightforward to add them (and please contact me if you have similar tasks you’d like added to the site).
… (emphasis in the original)

Crowd sourcing, parse code errors in the CATSS LXX morphological analysis, Patrick Altman and James Tauber! What more could you ask for!

Well, if you enjoy Django development, see https://github.com/jtauber/oxlos2, or if you have Greek morphology, sign up at: http://oxlos.org/.

After mastering Greek, you don’t really want to lose it from lack of practice. Yes? Perfect opportunity for recent or even not so recent Classics and divinity majors.

I suppose that’s a nice way to say you won’t be encountering LXX Greek on ESPN or CNN. 😉

D3 in Depth – Update

April 18th, 2017

D3 in Depth by Peter Cook

Peter has added three more chapters since my last visit:

There are another eight (8) to go.

I don’t know about you or Peter, but when people are showing interest in my work, I tend to work more diligently on it.

Drop by, ask questions, make suggestions.

Enjoy!

XSL Transformations (XSLT) Version 3.0 (Proposed Recommendation 18 April 2017)

April 18th, 2017

XSL Transformations (XSLT) Version 3.0 (Proposed Recommendation 18 April 2017)

Michael Kay tweeted today:

XSLT 3.0 is a Proposed Recommendation: https://www.w3.org/TR/xslt-30/ It’s taken ten years but we’re nearly there!

Congratulations to Michael and the entire team!

What’s new?

A major focus for enhancements in XSLT 3.0 is the requirement to enable streaming of source documents. This is needed when source documents become too large to hold in main memory, and also for applications where it is important to start delivering results before the entire source document is available.

While implementations of XSLT that use streaming have always been theoretically possible, the nature of the language has made it very difficult to achieve this in practice. The approach adopted in this specification is twofold: it identifies a set of restrictions which, if followed by stylesheet authors, will enable implementations to adopt a streaming mode of operation without placing excessive demands on the optimization capabilities of the processor; and it provides new constructs to indicate that streaming is required, or to express transformations in a way that makes it easier for the processor to adopt a streaming execution plan.

Capabilities provided in this category include:

  • A new xsl:source-document instruction, which reads and processes a source document, optionally in streaming mode;
  • The ability to declare that a mode is a streaming mode, in which case all the template rules using that mode must be streamable;
  • A new xsl:iterate instruction, which iterates over the items in a sequence, allowing parameters for the processing of one item to be set during the processing of the previous item;
  • A new xsl:merge instruction, allowing multiple input streams to be merged into a single output stream;
  • A new xsl:fork instruction, allowing multiple computations to be performed in parallel during a single pass through an input document.
  • Accumulators, which allow a value to be computed progressively during streamed processing of a document, and accessed as a function of a node in the document, without compromise to the functional nature of the XSLT language.

A second focus for enhancements in XSLT 3.0 is the introduction of a new mechanism for stylesheet modularity, called the package. Unlike the stylesheet modules of XSLT 1.0 and 2.0 (which remain available), a package defines an interface that regulates which functions, variables, templates and other components are visible outside the package, and which can be overridden. There are two main goals for this facility: it is designed to deliver software engineering benefits by improving the reusability and maintainability of code, and it is intended to streamline stylesheet deployment by allowing packages to be compiled independently of each other, and compiled instances of packages to be shared between multiple applications.

Other significant features in XSLT 3.0 include:

  • An xsl:evaluate instruction allowing evaluation of XPath expressions that are dynamically constructed as strings, or that are read from a source document;
  • Enhancements to the syntax of patterns, in particular enabling the matching of atomic values as well as nodes;
  • An xsl:try instruction to allow recovery from dynamic errors;
  • The element xsl:global-context-item, used to declare the stylesheet’s expectations of the global context item (notably, its type).
  • A new instruction xsl:assert to assist developers in producing correct and robust code.

XSLT 3.0 also delivers enhancements made to the XPath language and to the standard function library, including the following:

  • Variables can now be bound in XPath using the let expression.
  • Functions are now first class values, and can be passed as arguments to other (higher-order) functions, making XSLT a fully-fledged functional programming language.
  • A number of new functions are available, for example trigonometric functions, and the functions parse-xml and serialize to convert between lexical and tree representations of XML.

XSLT 3.0 also includes support for maps (a data structure consisting of key/value pairs, sometimes referred to in other programming languages as dictionaries, hashes, or associative arrays). This feature extends the data model, provides new syntax in XPath, and adds a number of new functions and operators. Initially developed as XSLT-specific extensions, maps have now been integrated into XPath 3.1 (see [XPath 3.1]). XSLT 3.0 does not require implementations to support XPath 3.1 in its entirety, but it does requires support for these specific features.

This will remain a proposed recommendation until 1 June 2017.

How close can you read? 😉

Enjoy!

Every NASA Image In One Archive – Crowd Sourced Index?

April 17th, 2017

NASA Uploaded Every Picture It Has to One Amazing Online Archive by Will Sabel Courtney.

From the post:

Over the last five decades and change, NASA has launched hundreds of men and women from the planet’s surface into the great beyond. But America’s space agency has had an emotional impact on millions, if not billions, of others who’ve never gone past the Karmann Line separating Earth from space, thanks to the images, audio, and video generated by its astronauts and probes. NASA has given us our best glimpses at distant galaxies and nearby planets—and in the process, helped us appreciate our own world even more.

And now, the agency has placed them all in one place for everyone to see: images.nasa.gov.

No, viewing this site will not be considered an excuse for a late tax return. 😉

On the other hand, it’s an impressive bit of work, although a search-only interface seems a bit thin to me.

The API docs don’t offer much comfort:

  • q (optional): Free text search terms to compare to all indexed metadata.
  • center (optional): NASA center which published the media.
  • description (optional): Terms to search for in “Description” fields.
  • keywords (optional): Terms to search for in “Keywords” fields. Separate multiple values with commas.
  • location (optional): Terms to search for in “Location” fields.
  • media_type (optional): Media types to restrict the search to. Available types: [“image”, “audio”]. Separate multiple values with commas.
  • nasa_id (optional): The media asset’s NASA ID.
  • photographer (optional): The primary photographer’s name.
  • secondary_creator (optional): A secondary photographer/videographer’s name.
  • title (optional): Terms to search for in “Title” fields.
  • year_start (optional): The start year for results. Format: YYYY.
  • year_end (optional): The end year for results. Format: YYYY.
With no index, your results depend on blindly guessing the metadata entered by a NASA staffer.

Well, for “moon” I would expect “the Moon,” but the results are likely to include moons of other worlds, etc.
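For example, a minimal query against the search endpoint (the images-api.nasa.gov host and response fields are taken from the API docs as of this writing; treat them as unverified here) looks like:

# Query the NASA image library for "moon" and print the first few results.
import requests

resp = requests.get("https://images-api.nasa.gov/search",
                    params={"q": "moon", "media_type": "image"}, timeout=30)
for item in resp.json().get("collection", {}).get("items", [])[:10]:
    data = item.get("data", [{}])[0]
    print(data.get("nasa_id"), "-", data.get("title"))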

Indexing this collection has all the marks of a potential crowd sourcing project:

  1. Easy to access data
  2. Free data
  3. Interesting data
  4. Metadata

Interested?

More Leveling – Undetectable Phishing Attack

April 17th, 2017

Chrome, Firefox, and Opera Vulnerable to Undetectable Phishing Attack by Catalin Cimpanu.

From the post:

Browsers such as Chrome, Firefox, and Opera are vulnerable to a new variation of an older attack that allows phishers to register and pass fake domains as the websites of legitimate services, such as Apple, Google, eBay, and others.

Discovered by Chinese security researcher Xudong Zheng, this is a variation of a homograph attack, first identified by Israeli researchers Evgeniy Gabrilovich and Alex Gontmakher, and known since 2001.

This particular hack depends upon lookalike characters being available within a single language set, which avoids mixing characters from different languages (mixed-script names are deemed phishing attempts).

To make this work, you will need a domain name written using Punycode (RFC 3492), which enables the writing of Unicode in ASCII.
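A small Python illustration (the Cyrillic string below is a stand-in example, not taken from the article) shows both the individual code points of a lookalike label and its RFC 3492 Punycode form:

# List the code points in a lookalike label and show its Punycode encoding.
import unicodedata

lookalike = "аррӏе"   # Cyrillic letters that render much like "apple"
for ch in lookalike:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

print(lookalike.encode("punycode"))   # the ASCII form used inside IDN labels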

There’s a task for deep learning, scanning the Unicode Code Charts for characters that are easy to confuse with ASCII characters.

If you have a link to such results, ping me with it.

Black Womxn Authors, Library of Congress and MarcXML (Part 1)

April 17th, 2017

This adventure started innocently enough with the 2017 Womxn of Color Reading Challenge by Der Vang. As an “older” White male Southerner working in technology, I don’t encounter works by womxn of color unless it is intentional.

The first book, “A book that became a movie,” was easy. I read the deeply moving Beloved by Toni Morrison. I recommend reading a non-critical edition before you read a critical one. Let Morrison speak for herself before you read others offering their views on the story.

The second book, “A book that came out the year you were born,” has proven to be more difficult. Far more difficult. You see, I think Der Vang was assuming a reading audience younger than I am, for which womxn of color authors would not be difficult to find. That hasn’t proven to be the case for me.

I searched the usual places but likely collections did not denote an author’s gender or race. The Atlanta-Fulton Public Library reference service came riding to the rescue after I had exhausted my talents with this message:

‘Attached is a “List of Books Published by Negro Writers in 1954 and Late 1953” (pp. 10-12) by Blyden Jackson, IN “The Blithe Newcomers: Resume of Negro Literature in 1954: Part I,” Phylon v.16, no.1 (1st Quarter 1955): 5-12, which has been annotated with classifications (Biography) or subjects (Poetry). Thirteen are written by women; however, just two are fiction. The brief article preceding the list does not mention the books by the women novelists–Elsie Jordan (Strange Sinner) or Elizabeth West Wallace (Scandal at Daybreak). No Part II has been identified. And AARL does not own these two. Searching AARL holdings in Classic Catalog by year yields seventeen by women but no fiction. Most are biographies. Two is better than none but not exactly a list.

A Celebration of Women Writers – African American Writers (http://digital.library.upenn.edu/women/_generate/AFRICAN%20AMERICAN.html) seems to have numerous [More Information] links which would possibly allow the requestor to determine the 1954 novelists among them.’
(emphasis in original)

Using those two authors/titles as leads, I found in the Library of Congress online catalog:

https://lccn.loc.gov/54007603
Jordan, Elsie. Strange sinner / Elsie Jordan. 1st ed. New York : Pageant, c1954.
172 p. ; 21 cm.
PZ4.J818 St

https://lccn.loc.gov/54012342
Wallace, Elizabeth West. [from old catalog] Scandal at daybreak. [1st ed.] New York, Pageant Press [1954]
167 p. 21 cm.
PZ4.W187 Sc

Checking elsewhere, both titles are out of print, although I did see one (1) copy of Elsie Jordan’s Strange Sinner for $100. I think I have located a university with a digital scan but will have to report back on that later.

Since both Jordan and Wallace published with Pageant Press the same year, I reasoned that other womxn of color may have also published with them and that could lead me to more accessible works.

Experienced librarians are no doubt already grinning, because if you search for “Pageant Press” in the Library of Congress online catalog, you get 961 “hits,” displayed 25 “hits” at a time. Yes, you can set the page to return 100 “hits” at a time, but not while you have sort by date of publication selected. 🙁

That is, you can display 100 “hits” per page in no particular order, or you can display the “hits” in date of publication order, but only 25 at a time. (Or at least that was my experience; please correct me if that’s wrong.)

But with the 100 “hits” per page, you can “save as” only as Marc records, Unicode (UTF-8) or not. There is no MarcXML option.

The Library of Congress’s response to my query about the same reads:

At the moment we have no plans to provide an option to save search results as MARCXML. We will consider it for future development projects.

I can understand that in the current climate in Washington, but a way to convert Marc records to the (in my view) easier-to-manipulate MarcXML format would be a real benefit to readers and researchers alike.

Fortunately, there is a solution: MarcEdit.

From the webpage:

This LibGuide attempts to document the features of MarcEdit, which was developed by Terry Reese. It is open source software designed to facilitate the harvesting, editing, and creation of MARC records. This LibGuide was adapted from a standalone document, and while the structure of the original document has been preserved in this LibGuide, it is also available in PDF form at the link below. The original documentation and this LibGuide were written with the idea that it would be consulted on an as-needed basis. As a result, the beginning steps of many processes may be repeated within the same page or across the LibGuide as a whole so that users would be able to understand the entire process of implementing a function within MarcEdit without having to consult other guides to know where to begin. There are also screenshots that are repeated throughout, which may provide a faster reference for users to understand what steps they may already be familiar with.

Of course, installing MarcEdit on Ubuntu isn’t a straightforward task. But I have 961 Marc records, and possibly more, that would be very useful in MarcXML. Tomorrow I will document the installation steps I followed with Ubuntu 16.04.
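If MarcEdit proves balky, here is an alternative sketch, not MarcEdit itself, that uses the third-party pymarc Python library to turn a file of binary Marc records into a MarcXML collection. The file names are hypothetical placeholders, and the sketch assumes pymarc’s documented MARCReader/XMLWriter behavior:

    from pymarc import MARCReader, XMLWriter

    # Convert binary MARC records (as saved from the LC catalog) to MARCXML.
    # File names below are placeholders, not the author's actual files.
    with open("pageant_press.mrc", "rb") as marc_in:
        writer = XMLWriter(open("pageant_press.xml", "wb"))
        for record in MARCReader(marc_in):
            writer.write(record)
        writer.close()  # closes the <collection> element and the output file

Either route gets you records you can process with ordinary XML tooling, which is the whole point of wanting MarcXML in the first place.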

PS: I’m not ignoring the suggested A Celebration of Women Writers – African American Writers (http://digital.library.upenn.edu/women/_generate/AFRICAN%20AMERICAN.html). But I have gotten distracted by the technical issue of how to convert all the holdings at the Library of Congress for a publisher into MarcXML. Suggestions on how to best use this resource?

Shadow Brokers Level The Playing Field

April 17th, 2017

The whining and moaning from some security analysts over the Shadow Brokers dumps is a mystery to me.

Apologies for the pie chart, but the blue area represents the widely vulnerable population pre-Shadow Brokers leak:

I’m sorry, you can’t really see the 0.01% or less who weren’t vulnerable pre-Shadow Brokers leak. Try this enlargement:

Shadow Brokers, especially if they leak more current tools, are leveling the playing field for the average user/hacker.

Instead of 99.99% of users being in danger from the people who buy and sell zero-day exploits (some governments and corporations), now closer to 100% of all users are in danger.

Listen to them howl!

What was not a big deal, since people with power could hack the other 99.99% of us, certainly is now a really big deal.

Maybe we will see incentives for more secure software when everyone, and I mean everyone, is at equal risk.

Help Shadow Brokers level the security playing field.

A policy of posting discovered vulnerabilities promotes user equality.

Do you favor user equality or some other social regime?

The Line Between Safety and Peril – (patched) “Supported Products”

April 15th, 2017

Dan Goodin in NSA-leaking Shadow Brokers just dumped its most damaging release yet reports in part:


Friday’s release—which came as much of the computing world was planning a long weekend to observe the Easter holiday—contains close to 300 megabytes of materials the leakers said were stolen from the NSA. The contents (a convenient overview is here) included compiled binaries for exploits that targeted vulnerabilities in a long line of Windows operating systems, including Windows 8 and Windows 2012. It also included a framework dubbed Fuzzbunch, a tool that resembles the Metasploit hacking framework that loads the binaries into targeted networks.

Independent security experts who reviewed the contents said it was without question the most damaging Shadow Brokers release to date.
“It is by far the most powerful cache of exploits ever released,” Matthew Hickey, a security expert and co-founder of Hacker House, told Ars. “It is very significant as it effectively puts cyber weapons in the hands of anyone who downloads it. A number of these attacks appear to be 0-day exploits which have no patch and work completely from a remote network perspective.”

News of the release has been fanned by non-technical outlets; see, for example, NSA’s powerful Windows hacking tools leaked online by Selena Larson at CNN Tech.

Microsoft has responded with Protecting customers and evaluating risk:

Today, Microsoft triaged a large release of exploits made publicly available by Shadow Brokers. Understandingly, customers have expressed concerns around the risk this disclosure potentially creates. Our engineers have investigated the disclosed exploits, and most of the exploits are already patched. Below is our update on the investigation.

Code Name        Solution
EternalBlue      Addressed by MS17-010
EmeraldThread    Addressed by MS10-061
EternalChampion  Addressed by CVE-2017-0146 & CVE-2017-0147
ErraticGopher    Addressed prior to the release of Windows Vista
EskimoRoll       Addressed by MS14-068
EternalRomance   Addressed by MS17-010
EducatedScholar  Addressed by MS09-050
EternalSynergy   Addressed by MS17-010
EclipsedWing     Addressed by MS08-067

Of the three remaining exploits, “EnglishmanDentist”, “EsteemAudit”, and “ExplodingCan”, none reproduces on supported platforms, which means that customers running Windows 7 and more recent versions of Windows or Exchange 2010 and newer versions of Exchange are not at risk. Customers still running prior versions of these products are encouraged to upgrade to a supported offering.
… (emphasis in original)

You are guaranteed to be in peril if you are not running patched, supported Microsoft products.

Even if you are running a supported product, know that around 50% of successful exploits involve vulnerabilities for which a patch was available but never applied.

The hackers who may be in your system right now create no lasting incentive to improve. Liability, for vendors with unreasonably poor coding practices and for your company when breaches result from your own practices (such as failure to apply patches), would be a real incentive for more secure software and better security practices.

If you are serious about cybersecurity, focus on people you can reach and not those you encounter at random (hackers).