Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

April 30, 2017

Tor 0.3.0.6 is released: a new series is stable!

Filed under: Cybersecurity,Tor — Patrick Durusau @ 7:47 pm

Tor 0.3.0.6 is released: a new series is stable!

From the post:

Tor 0.3.0.6 is the first stable release of the Tor 0.3.0 series.

With the 0.3.0 series, clients and relays now use Ed25519 keys to authenticate their link connections to relays, rather than the old RSA1024 keys that they used before. (Circuit crypto has been Curve25519-authenticated since 0.2.4.8-alpha.) We have also replaced the guard selection and replacement algorithm to behave more robustly in the presence of unreliable networks, and to resist guard-capture attacks.

This series also includes numerous other small features and bugfixes, along with more groundwork for the upcoming hidden-services revamp.

Per our stable release policy, we plan to support the Tor 0.3.0 release series for at least the next nine months, or for three months after the first stable release of the 0.3.1 series: whichever is longer. If you need a release with long-term support, we recommend that you stay with the 0.2.9 series.

If you build Tor from source, you can find it at the usual place on the website. Packages should be ready over the next weeks, with a Tor Browser release in late May or early June.

Below are the changes since 0.2.9.10. For a list of only the changes since 0.3.0.5-rc, see the ChangeLog file.

I’ve been real lazy with Tor, waiting for packages, etc.

Not that I can “proof” the code, but I should at least be building from source.

Good practice if nothing else.

I’ll take a shot at building from source for Ubuntu 16.04 this week and report on how it goes.

April 28, 2017

Alert! Alert! Good Use For Cat Videos!

Filed under: Browsers,Cybersecurity,Privacy — Patrick Durusau @ 4:36 pm

A Trick That Hides Censored Websites Inside Cat Videos by Kaveh Waddell.

From the post:

A pair of researchers behind a system for avoiding internet censorship wants to deliver banned websites inside of cat videos. Their system uses media from popular, innocuous websites the way a high schooler might use the dust jacket of a textbook to hide the fact that he’s reading a comic book in class. To the overseeing authority—in the classroom, the teacher; on the internet, a government censor—the content being consumed appears acceptable, even when it’s illicit.

The researchers, who work at the University of Waterloo’s cryptography lab, named their system Slitheen after a race of aliens from Doctor Who who wear the skins of their human victims to blend in. The system uses a technique called decoy routing, which allows users to view blocked sites—like a social-networking site or a news site—while generating a browsing trail that looks exactly as if they were just browsing for shoes or watching silly videos on YouTube.

Slitheen’s defining feature is that the complex traffic it generates is indistinguishable from a normal request. That is, two computers sitting next to one another, downloading data from Amazon.com’s homepage—one that does so normally and another with the contents of this Atlantic story instead of Amazon’s images and videos—would create identical traffic patterns. The more complex Slitheen request would take slightly longer to come back, but its defining characteristics, from packet size to timing, would be the same.

How about that! With a clean local browser history as well.

After reading Waddell’s post, read Slitheen: Perfectly imitated decoy routing through traffic replacement, then grab the code at: https://crysp.uwaterloo.ca/software/slitheen/.

Talk up and recommend Slitheen to your friends, startups, ISPs, etc.

Imagine an Internet free of government surveillance. Doesn’t that sound enticing?

EU’s Unfunded Hear/See No Evil Policy

Filed under: Censorship,EU,Free Speech — Patrick Durusau @ 1:17 pm

EU lawmakers vote to make YouTube fight online hate speech by Julia Floretti.

From the post:

Video-sharing platforms such as Google’s YouTube and Vimeo will have to take measures to protect citizens from content containing hate speech and incitement to violence under measures voted by EU lawmakers on Tuesday.

The proliferation of hate speech and fake news on social media has led to companies coming under increased pressure to take it down quickly, while internet campaigners have warned an excessive crackdown could endanger freedom of speech.

Members of the culture committee in the European Parliament voted on a legislative proposal that covers everything from 30 percent quotas for European works on video streaming websites such as Netflix to advertising times on TV to combating hate speech.

Ironically, the reported vote was by the “CULT” committee. No, I’m not making that up! I can prove that from the documents page:

From the report,


Amendment 18

(28) Some of the content stored on video-sharing platforms is not under the editorial responsibility of the video-sharing platform provider. However, those providers typically determine the organisation of the content, namely programmes or user-generated videos, including by automatic means or algorithms. Therefore, those providers should be required to take appropriate measures to protect minors from content that may impair their physical, mental or moral development and protect all citizens from incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to sex, race, colour, religion, descent or national or ethnic origin.
… (emphasis in original)

In addition to being censorship, unfunded censorship at that, the EU report runs afoul of the racist reality of the EU.

If you’re up for some difficult reading, consider Intolerance, Prejudice and Discrimination – A European Report by Forum Berlin, Andreas Zick, Beate Küpper, and Andreas Hövermann.

From page 13 of the report:

  • Group-focused enmity is widespread in Europe. It is weakest in the Netherlands, and strongest in Poland and Hungary. With respect to anti-immigrant attitudes, anti-Muslim attitudes and racism there are only minor differences between the countries, while differences in the extent of anti-Semitism, sexism and homophobia are much more marked.
  • About half of all European respondents believe there are too many immigrants in their country. Between 17 percent in the Netherlands and more than 70 percent in Poland believe that Jews seek to benefit from their forebears’ suffering during the Nazi era. About one third of respondents believe there is a natural hierarchy of ethnicity. Half or more condemn Islam as “a religion of intolerance”. A majority in Europe also subscribe to sexist attitudes rooted in traditional gender roles and demand that: “Women should take their role as wives and mothers more seriously.” With a figure of about one third, Dutch respondents are least likely to affirm sexist attitudes. The proportion opposing equal rights for homosexuals ranges between 17 percent in the Netherlands and 88 percent in Poland; they believe it is not good “to allow marriages between two men or two women”.

At the risk of insulting our simian relatives, this new EU policy can be summarized by:

(source: Three Wise Monkeys)

Suppressing hate speech does not result in less hate, only in less evidence of it.

While this legislation is pending, YouTube and Vimeo should occasionally suspend access of EU viewers for an hour. EU voters may decide they need more responsible leadership.

4.5 Billion Forced To Boycott ‘Hack the Air Force’ (You Should Too)

Filed under: Cybersecurity,Government — Patrick Durusau @ 10:41 am

I mentioned in How Do Hackers Live on $53.57? (‘Hack the Air Force’) that only hackers in Australia, Canada, New Zealand, the United Kingdom and United States can participate in ‘Hack the Air Force.’

For a rough count of those excluded, let’s limit hackers to being between the ages of 15 and 64. The World Bank puts that at 66% of the total population as of 2015.

OK, the World Population Clock gives a world population as of 28 April 2017 as 7,500,889,628.

Consulting the table for population by country, we find: Australia (25M), Canada (37M), New Zealand (5M), the United Kingdom (66M) and United States (326M), for a total of 459 million.

Rounding the world’s population to 7,501,000,000, 66% of that population is 4,950,660,000 potential hackers world-wide, and from Australia, Canada, New Zealand, the United Kingdom and United States, 302,940,000 potential hackers.

Hmmm,

Worldwide 4,950,660,000
AF Rules 302,940,000
Excluded 4,647,720,000

Not everyone between the ages of 15 and 64 is a hacker but raw numbers indicate a weakness in the US Air Force approach.
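If you want to check the raw numbers yourself, a few lines of Python reproduce the count from the rounded figures above (integer math, to avoid floating-point fuzz):

```python
# Rough count of hackers excluded from 'Hack the Air Force'.
# Country populations in millions, rounded as in the text above.
eligible_countries = {
    "Australia": 25, "Canada": 37, "New Zealand": 5,
    "United Kingdom": 66, "United States": 326,
}
world_population = 7_501_000_000
# World Bank: ages 15-64 were 66% of total population in 2015.

worldwide = world_population * 66 // 100
eligible = sum(eligible_countries.values()) * 1_000_000 * 66 // 100
excluded = worldwide - eligible

print(f"Worldwide: {worldwide:,}")
print(f"Eligible:  {eligible:,}")
print(f"Excluded:  {excluded:,}")
```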

If ‘Hack the Air Force’ attracts any participants at all (participation is a bad idea; it damages the cybersecurity labor market), those participants will be very similar to those who wrote the insecure systems for the Air Force.

The few participants will find undiscovered weaknesses. But the weaknesses they find will be those anyone similar to them would find. A lack of diversity in security testing is as serious a flaw as standard root passwords.

If you need evidence of the need for diversity in security testing, consider any of the bugs found after any major software release ships. One assumes that Microsoft, Oracle, Cisco, etc., don’t deliberately ignore major security flaws. Yet the headlines are filled with news of such flaws.

My explanation is that different people look for vulnerabilities differently and hence discover different vulnerabilities.

What’s yours?

As far as the ‘Hack the Air Force’ contest, my counsel is to boycott it along with all those forcibly excluded from participating.

The extreme lack of diversity in the hacking pool is a guarantee that post-contest, the public web systems of the US Air Force will remain insecure.

Moreover, it’s not in the interest of the cybersecurity defense community to encourage practices that damage the chances cybersecurity defense will become a viable occupation.

PS: Appeals to patriotism are amusing. The Air Force spent $billions constructing insecure systems. The people who built and maintain these insecure systems were/are paid a living wage. Having bought damaged goods, repeatedly and likely from the same people, what basis does the Air Force have to seek free advice and labor on its problems?

April 27, 2017

Facebook Used To Spread Propaganda (The other use of Facebook would be?)

Filed under: Facebook,Government,Journalism,News,Subject Identity,Topic Maps — Patrick Durusau @ 8:31 pm

Facebook admits: governments exploited us to spread propaganda by Olivia Solon.

From the post:

Facebook has publicly acknowledged that its platform has been exploited by governments seeking to manipulate public opinion in other countries – including during the presidential elections in the US and France – and pledged to clamp down on such “information operations”.

In a white paper authored by the company’s security team and published on Thursday, the company detailed well-funded and subtle techniques used by nations and other organizations to spread misleading information and falsehoods for geopolitical goals. These efforts go well beyond “fake news”, the company said, and include content seeding, targeted data collection and fake accounts that are used to amplify one particular view, sow distrust in political institutions and spread confusion.

“We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” said the company.

It’s a good white paper and you can intuit a lot from it, but leaks on the details of Facebook counter-measures have commercial value.

Careful media advisers will start farming Facebook users now for the US mid-term elections in 2018. One of the “tells” (a behavior that discloses, unintentionally, a player’s intent) of a “fake” account is recent establishment with many similar accounts.

Such accounts need to be managed so that their “identity” fits the statistical average for similar accounts. They should not all suddenly like a particular post or account, for example.
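As a toy illustration of that “tell” (all account names, thresholds, and record layouts here are made up, not Facebook’s actual detection logic): recently created accounts that all like the same post within minutes of each other stand out against organic behavior.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical records: (account, created, post_liked, liked_at).
likes = [
    ("acct1", datetime(2017, 4, 1), "post9", datetime(2017, 4, 20, 12, 0)),
    ("acct2", datetime(2017, 4, 2), "post9", datetime(2017, 4, 20, 12, 3)),
    ("acct3", datetime(2017, 4, 1), "post9", datetime(2017, 4, 20, 12, 5)),
    ("acct4", datetime(2015, 1, 1), "post9", datetime(2017, 4, 22, 9, 0)),
]

def suspicious_clusters(likes, max_age=timedelta(days=60),
                        window=timedelta(minutes=10), min_size=3):
    """Group likes of the same post by young accounts; flag tight bursts."""
    by_post = defaultdict(list)
    for acct, created, post, liked_at in likes:
        if liked_at - created <= max_age:      # recently created account
            by_post[post].append((liked_at, acct))
    flagged = {}
    for post, events in by_post.items():
        events.sort()
        if len(events) >= min_size and events[-1][0] - events[0][0] <= window:
            flagged[post] = [a for _, a in events]
    return flagged

print(suspicious_clusters(likes))  # acct1-3 burst on post9; acct4 is old
```

Careful account farmers, of course, would randomize exactly these timings, which is the point of the paragraph above.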

The doctrines of subject identity in topic maps can be used to avoid subject recognition as well as to ensure it. Just the other side of the same coin.

Coloring US Hacker Bigotry (Test Your Geographic Ignorance)

Filed under: Cybersecurity,Geography,Government — Patrick Durusau @ 4:22 pm

I failed to mention in How Do Hackers Live on $53.57? (‘Hack the Air Force’) that ‘Hack the Air Force’ is limited to hackers in Australia, Canada, New Zealand, the United Kingdom, and the United States (blue on the following map).

The dreaded North Korean hackers, the omnipresent Russian hackers (of Clinton fame), government associated Chinese hackers, not to mention the financially savvy East European hackers, and many others, are left out of this contest (red on the map).

The US Air Force is “fishing in the shallow end of the cybersecurity talent pool.”

I say this is “a partial cure for geographic ignorance,” because I started with the BlankMap-World4.svg map and proceeded in Gimp to fill in the outlines with appropriate colors.

There are faster map creation methods but going one by one impressed upon me the need to improve my geographic knowledge!

April 26, 2017

How To Avoid Lying to Government Agents (Memorize)

Filed under: FBI,Government,Law — Patrick Durusau @ 7:58 pm

How to Avoid Going to Jail under 18 U.S.C. Section 1001 for Lying to Government Agents by Solomon L. Wisenberg.

Great post but Wisenberg buries his best advice twelve paragraphs into the story. (Starts with: “Is there an intelligent alternative to lying….”)

Memorize this sentence:

I will not answer any questions without first consulting an attorney.

That’s it. Short, sweet and to the point. Make no statements at all other than that one. No “I have nothing to hide,” etc.

It’s like name, rank, serial number you see in the old war movies. Don’t say anything other than that sentence.

For every statement a government agent makes, simply repeat that sentence. Remember, you can’t lie if you don’t say anything other than that sentence.

See Wisenberg’s post for the details but the highlighted sentence is the only one you need.

How Do Hackers Live on $53.57? (‘Hack the Air Force’)

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 4:27 pm

I ask because once you get past the glowing generalities of USAF Launches ‘Hack the Air Force’:

Let the friendly hacking fly: The US Air Force will allow vetted white hat hackers and other computer security specialists to root out vulnerabilities in some of its main public websites.

You find:


Reina Staley, chief of staff for the Defense Digital Service, notes that white-hat hacking and crowdsourced security initiatives are often used by small businesses and large companies to beef up their security. Payouts for Hack the Air Force will be made based on the severity of the exploit discovered, and there will be only one payout per exploit.

Staley notes that the DoD’s Hack the Pentagon initiative, which was launched in April 2016 by the Defense Digital Service, was the federal government’s first bug bounty program. More than 1,400 hackers registered to participate, and DoD paid $75,000 in bounties.

“In the past, we contracted to a security research firm and they found less than 20 unique vulnerabilities,” Staley explains. “For Hack the Pentagon, the 1,400 hackers found 138 unique vulnerabilities, most of them previously unknown.”

Kim says Hack the Air Force is all about being more proactive in finding security flaws and fixing them quickly. “While the money is a draw, we’re also finding that people want to participate in the program for patriotic reasons as well. People want to see the Internet and Armed Forces networks become safer,” he says.

Let’s see, $75,000 split between 1,400 hackers, that’s $53.57 per hacker, on average. Some got more than average, some got nothing at all.
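The arithmetic behind the headline figure is one line:

```python
# Hack the Pentagon: total bounties paid, split across all registrants.
total_paid = 75_000
registered = 1_400
average = total_paid / registered
print(f"${average:.2f} per registered hacker")  # prints "$53.57 per registered hacker"
```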

‘Hack the Air Force’ damages the defensive cybersecurity labor market by driving down compensation for cybersecurity skills. Skills that take time, hard work, and talent to develop, but the Air Force devalues them with chump change.

I fully agree with anyone who says government, DoD or Air Force cybersecurity sucks.

However, the Air Force chose to spend money on valets, chauffeurs for its generals, fighter jets that randomly burst into flames, etc., just as they chose to neglect cybersecurity.

Not my decision, not my problem.

Want an effective solution?

First step, “…use the free market Luke!” Create an Air Force contact point where hackers can anonymously submit notices of vulnerabilities. Institute a reliable and responsive process that offers compensation (market-based compensation) for those finds. Compensation paid in bitcoins.

Bear in mind that paying market rates and responding with market-reasonable speed will be critical to the success of such a portal. Yes, in a “huffy” voice, “you are the US Air Force,” but hackers will have something you need and cannot supply yourself. Live with it.

Second step, create a very “lite” contracting process for short-term cybersecurity audits or services. That means abandoning the layers of reports and the graft of primes, sub-primes and sub-sub-primes, with all the nest-feathering of contracting officers, etc., along the way. Oh, drug tests as well. You want results, not squeaky clean but so-so hackers.

Third step, disclose vulnerabilities in other armed services, both domestic and foreign. Time spent hacking them is time not spent hacking you. Yes?

Until the Air Force stops damaging the defensive cybersecurity labor market, boycott the ‘Hack the Air Force’ at HackerOne and all similar efforts.

Is This Public Sector Corruption Map Knowingly False?

Filed under: Government,Journalism,News — Patrick Durusau @ 1:12 pm

The New York Times, Google and Facebook would all report no.

Knowingly false?

It uses the definition of “corruption” in McCutcheon v. Federal Election Comm’n, 134 S. Ct. 1434 (2014).

Chief Justice Roberts writing for the majority:


Moreover, while preventing corruption or its appearance is a legitimate objective, Congress may target only a specific type of corruption—“quid pro quo” corruption. As Buckley explained, Congress may permissibly seek to rein in “large contributions [that] are given to secure a political quid pro quo from current and potential office holders.” 424 U. S., at 26. In addition to “actual quid pro quo arrangements,” Congress may permissibly limit “the appearance of corruption stemming from public awareness of the opportunities for abuse inherent in a regime of large individual financial contributions” to particular candidates. Id., at 27; see also Citizens United, 558 U. S., at 359 (“When Buckley identified a sufficiently important governmental interest in preventing corruption or the appearance of corruption, that interest was limited to quid pro quo corruption”).

Spending large sums of money in connection with elections, but not in connection with an effort to control the exercise of an officeholder’s official duties, does not give rise to such quid pro quo corruption. Nor does the possibility that an individual who spends large sums may garner “influence over or access to” elected officials or political parties. Id., at 359; see McConnell v. Federal Election Comm’n, 540 U.S. 93, 297 (2003) (KENNEDY, J., concurring in judgment in part and dissenting in part). And because the Government’s interest in preventing the appearance of corruption is equally confined to the appearance of quid pro quo corruption, the Government may not seek to limit the appearance of mere influence or access. See Citizens United, 558 U. S., at 360.
… (page 20)

But with the same “facts,” if your definition of “quid pro quo” included campaign contributions, then this map is obviously false.

In fact, Christopher Robertson, D. Alex Winkelman, Kelly Bergstrand, and Darren Modzelewski, in The Appearance and the Reality of Quid Pro Quo Corruption: An Empirical Investigation Journal of Legal Analysis (2016) 8 (2): 375-438. DOI: https://doi.org/10.1093/jla/law006, conduct an empirical investigation into how jurors could view campaign contributions as “quid pro quo.”

Abstract:

The Supreme Court says that campaign finance regulations are unconstitutional unless they target “quid pro quo” corruption or its appearance. To test those appearances, we fielded two studies. First, in a highly realistic simulation, three grand juries deliberated on charges that a campaign spender bribed a Congressperson. Second, 1271 representative online respondents considered whether to convict, with five variables manipulated randomly. In both studies, jurors found quid pro quo corruption for behaviors they believed to be common. This research suggests that Supreme Court decisions were wrongly decided, and that Congress and the states have greater authority to regulate campaign finance. Prosecutions for bribery raise serious problems for the First Amendment, due process, and separation of powers. Safe harbors may be a solution.

Using Robertson, et al., “quid pro quo,” or even a more reasonable definition of “corruption:”

Transparency International defines corruption broadly as the abuse of entrusted power for private gain. (What is Public Sector Corruption?)

a re-colorization of the map shows a different reading of corruption in the United States:

Do you think the original map (top) is going to appear with warnings it depends on how you define corruption?

Or with a note saying a definition was chosen to conceal corruption of the US government?

I didn’t think so either.

PS: The U.S. has less minor corruption than many countries. The practice of and benefits from corruption are limited to the extremely wealthy.

April 24, 2017

Metron – A Fist Full of Subjects

Filed under: Cybersecurity,Security,Semantics,Subject Identity — Patrick Durusau @ 8:22 pm

Metron – Apache Incubator

From the description:

Metron integrates a variety of open source big data technologies in order to offer a centralized tool for security monitoring and analysis. Metron provides capabilities for log aggregation, full packet capture indexing, storage, advanced behavioral analytics and data enrichment, while applying the most current threat-intelligence information to security telemetry within a single platform.

Metron can be divided into 4 areas:

  1. A mechanism to capture, store, and normalize any type of security telemetry at extremely high rates. Because security telemetry is constantly being generated, it requires a method for ingesting the data at high speeds and pushing it to various processing units for advanced computation and analytics.
  2. Real time processing and application of enrichments such as threat intelligence, geolocation, and DNS information to telemetry being collected. The immediate application of this information to incoming telemetry provides the context and situational awareness, as well as the “who” and “where” information that is critical for investigation.
  3. Efficient information storage based on how the information will be used:
    1. Logs and telemetry are stored such that they can be efficiently mined and analyzed for concise security visibility
    2. The ability to extract and reconstruct full packets helps an analyst answer questions such as who the true attacker was, what data was leaked, and where that data was sent
    3. Long-term storage not only increases visibility over time, but also enables advanced analytics such as machine learning techniques to be used to create models on the information. Incoming data can then be scored against these stored models for advanced anomaly detection.
  4. An interface that gives a security investigator a centralized view of data and alerts passed through the system. Metron’s interface presents alert summaries with threat intelligence and enrichment data specific to that alert on one single page. Furthermore, advanced search capabilities and full packet extraction tools are presented to the analyst for investigation without the need to pivot into additional tools.

Big data is a natural fit for powerful security analytics. The Metron framework integrates a number of elements from the Hadoop ecosystem to provide a scalable platform for security analytics, incorporating such functionality as full-packet capture, stream processing, batch processing, real-time search, and telemetry aggregation. With Metron, our goal is to tie big data into security analytics and drive towards an extensible centralized platform to effectively enable rapid detection and rapid response for advanced security threats.
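Item 2 of the list, stream-time enrichment, is the easiest to picture in miniature. The sketch below is a hypothetical stand-in, not Metron’s actual topology: incoming telemetry events are joined against reference data (threat intel, geolocation) as they arrive, so the analyst sees context, not bare IP addresses.

```python
# Toy enrichment tables; in Metron these would be HBase-backed
# enrichment stores fed by threat-intel and geo feeds.
THREAT_INTEL = {"198.51.100.7": "known C2 node"}
GEO = {"198.51.100.7": "Unknownistan", "192.0.2.10": "US"}

def enrich(event):
    """Annotate one telemetry event with geo and threat-intel context."""
    ip = event.get("src_ip")
    event["geo"] = GEO.get(ip, "unknown")
    if ip in THREAT_INTEL:
        event["threat"] = THREAT_INTEL[ip]
        event["alert"] = True
    return event

telemetry = [{"src_ip": "198.51.100.7", "dst_port": 443},
             {"src_ip": "192.0.2.10", "dst_port": 80}]
for e in map(enrich, telemetry):
    print(e)
```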

Some useful links:

Metron (website)

Metron wiki

Metron Jira

Metron Git

Security threats aren’t going to assign themselves unique and immutable IDs. Which means they will be identified by characteristics and associated with particular acts (think associations), which are composed of other subjects, such as particular malware, dates, etc.

Being able to robustly share such identifications (unlike the “we’ve seen this before at some unknown time, with unknown characteristics,” typical of Russian attribution reports) would be a real plus.
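The topic-map-style move can be sketched in a few lines: since threat reports carry no shared ID, treat two reports as the same subject when their identifying characteristics overlap. Field names and matching rules below are purely illustrative assumptions, not any existing sharing format.

```python
def same_subject(a, b, keys=("malware_hash", "c2_domain", "mutex")):
    """Two reports identify the same threat if any identifying
    characteristic matches exactly."""
    return any(k in a and k in b and a[k] == b[k] for k in keys)

report_us = {"source": "US-CERT", "malware_hash": "d41d8cd9",
             "c2_domain": "evil.example"}
report_eu = {"source": "CERT-EU", "c2_domain": "evil.example",
             "mutex": "Global\\xyz"}

if same_subject(report_us, report_eu):
    merged = {**report_us, **report_eu}  # merge characteristics from both
    print(merged)
```

The payoff is in the merge: each report contributes characteristics the other lacked, which is exactly what vague attribution reports fail to deliver.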

Looks like a great opportunity for topic maps-like thinking.

Yes?

3 Reasons to Read: Algorithms to Live By

Filed under: Algorithms,Computer Science,Intelligence — Patrick Durusau @ 7:51 pm

How Algorithms can untangle Human Questions. Interview with Brian Christian by Roberto V. Zican.

The entire interview is worth your study but the first question and answer establish why you should read Algorithms to Live By:

Q1. You have worked with cognitive scientist Tom Griffiths (professor of psy­chol­ogy and cognitive science at UC Berkeley) to show how algorithms used by computers can also untangle very human questions. What are the main lessons learned from such a joint work?

Brian Christian: I think ultimately there are three sets of insights that come out of the exploration of human decision-making from the perspective of computer science.

The first, quite simply, is that identifying the parallels between the problems we face in everyday life and some of the canonical problems of computer science can give us explicit strategies for real-life situations. So-called “explore/exploit” algorithms tell us when to go to our favorite restaurant and when to try something new; caching algorithms suggest — counterintuitively — that the messy pile of papers on your desk may in fact be the optimal structure for that information.

Second is that even in cases where there is no straightforward algorithm or easy answer, computer science offers us both a vocabulary for making sense of the problem, and strategies — using randomness, relaxing constraints — for making headway even when we can’t guarantee we’ll get the right answer every time.

Lastly and most broadly, computer science offers us a radically different picture of rationality than the one we’re used to seeing in, say, behavioral economics, where humans are portrayed as error-prone and irrational. Computer science shows us that being rational means taking the costs of computation — the costs of decision-making itself — into account. This leads to a much more human, and much more achievable picture of rationality: one that includes making mistakes and taking chances.
… (emphasis in original)
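The “explore/exploit” strategy Christian mentions can be sketched as a minimal epsilon-greedy loop: usually revisit the best option seen so far, occasionally try something new. The payoff values are made up; this is the textbook heuristic, not the book’s exact presentation.

```python
import random

def epsilon_greedy(payoffs, epsilon=0.1, rounds=1000, seed=42):
    """Average reward from epsilon-greedy choice among noisy options."""
    rng = random.Random(seed)
    totals = {k: 0.0 for k in payoffs}
    counts = {k: 0 for k in payoffs}
    earned = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon or not any(counts.values()):
            choice = rng.choice(list(payoffs))  # explore: try anything
        else:                                   # exploit: best average so far
            choice = max(totals, key=lambda k: totals[k] / max(counts[k], 1))
        reward = payoffs[choice] + rng.gauss(0, 0.1)  # noisy payoff
        totals[choice] += reward
        counts[choice] += 1
        earned += reward
    return earned / rounds

print(epsilon_greedy({"favorite": 1.0, "new_place": 0.6, "dive": 0.2}))
```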

After the 2016 U.S. presidential election, I thought the verdict that humans are error-prone and irrational was unassailable.

Looking forward to the use of a human constructed lens (computer science) to view “human questions.” There are answers to “human questions” baked into computer science so watching the authors unpack those will be an interesting read. (Waiting for my copy to arrive.)

Just so you know, the Picador edition is a reprint. It was originally published in hardcover by William Collins on 21 April 2016; see Algorithms to Live By, a short review by Roberto Zicari, October 24, 2016.

Scotland Yard Outsources Violation of Your Privacy

Filed under: Cybersecurity,Government,Privacy — Patrick Durusau @ 3:07 pm

Whistleblower uncovers London police hacking of journalists and protestors by Trevor Johnson.

From the post:

The existence of a secretive unit within London’s Metropolitan Police that uses hacking to illegally access the emails of hundreds of political campaigners and journalists has been revealed. At least two of the journalists work for the Guardian.

Green Party representative in the British House of Lords, Jenny Jones, exposed the unit’s existence in an opinion piece in the Guardian. The facts she revealed are based on a letter written to her by a whistleblower.

The letter reveals that through the hacking, Scotland Yard has illegally accessed the email accounts of activists for many years, and this was possible due to help from “counterparts in India.” The letter alleged that the Metropolitan Police had asked police in India to obtain passwords on their behalf—a job that the Indian police subcontracted out to groups of hackers in India.

The Indian hackers sent back the passwords obtained, which were then used illegally by the unit within the Met to gather information from the emails of those targeted.

Trevor covers a number of other points, additional questions that should be asked, the lack of media coverage over this latest outrage, etc., all of which merit your attention.

From my perspective, these abuses by the London Metropolitan Police (Scotland Yard), are examples of the terrorism bogeyman furthering government designs against quarrelsome but otherwise ordinary citizens.

Quarrelsome but otherwise ordinary citizens are far safer and easier to spy upon than seeking out actual wrongdoers. And spying justifies part of Scotland Yard’s budget, since everyone “knows” a lack of actionable intelligence means terrorists are hiding successfully, not the more obvious lack of terrorists to be found.

As described in Trevor’s post, Scotland Yard, like all other creatures of government, thrives in shadows. Shadows where its decisions are beyond discussion and reproach.

In choosing between supporting government-spawned creatures that live in the shadows and working to dispel the shadows that foster them, remember they are not, were not and never will be “…on your side.”

They have a side, but it most assuredly is not yours.

Leaking Improves Security – Secrecy Weakens It

Filed under: Cybersecurity,NSA — Patrick Durusau @ 10:04 am

If you need a graphic for the point that leaking improves security – secrecy weakens it, consider this one:

Ask your audience:

Prior to the Shadow Brokers leak of the NSA’s DoublePulsar Malware, how many people were researching a counter to it?

Same question, but substitute: After the Shadow Brokers leak ….

As the headline says: Leaking Improves Security – Secrecy Weakens It.

Image originates from: Over 36,000 Computers Infected with NSA’s DoublePulsar Malware by Catalin Cimpanu.

Anyone who suggests otherwise wants you and others to be insecure.

April 23, 2017

Fraudulent Peer Review – Clue? Responded On Time!

Filed under: Peer Review,Science — Patrick Durusau @ 7:28 pm

107 cancer papers retracted due to peer review fraud by Cathleen O’Grady.

As if peer review weren’t enough of a sham, some authors took it to another level:


It’s possible to fake peer review because authors are often asked to suggest potential reviewers for their own papers. This is done because research subjects are often blindingly niche; a researcher working in a sub-sub-field may be more aware than the journal editor of who is best-placed to assess the work.

But some journals go further and request, or allow, authors to submit the contact details of these potential reviewers. If the editor isn’t aware of the potential for a scam, they then merrily send the requests for review out to fake e-mail addresses, often using the names of actual researchers. And at the other end of the fake e-mail address is someone who’s in on the game and happy to send in a friendly review.

Fake peer reviewers often “know what a review looks like and know enough to make it look plausible,” said Elizabeth Wager, editor of the journal Research Integrity & Peer Review. But they aren’t always good at faking less obvious quirks of academia: “When a lot of the fake peer reviews first came up, one of the reasons the editors spotted them was that the reviewers responded on time,” Wager told Ars. Reviewers almost always have to be chased, so “this was the red flag. And in a few cases, both the reviews would pop up within a few minutes of each other.”

I’m sure timely submission of reviews wasn’t the only basis for calling fraud but it is an amusing one.

It’s past time to jettison the bloated machinery of peer review. Judge work by its use, not where it’s published.
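The timing red flag Wager describes lends itself to a mechanical check. A minimal sketch, assuming review records are simple (reviewer, submitted-at) pairs; the data layout here is my invention, not anything from the article or any editorial system:

```python
from datetime import datetime, timedelta

def suspiciously_close_reviews(reviews, window_minutes=10):
    """Flag pairs of reviews submitted within window_minutes of each other.

    reviews is a list of (reviewer_name, submitted_at) tuples, where
    submitted_at is a datetime. Real editorial systems track far more,
    but timing alone was the red flag described above: genuine reviewers
    are chased for weeks; fake ones answer promptly, sometimes minutes apart.
    """
    window = timedelta(minutes=window_minutes)
    flagged = []
    # Sort by submission time, then compare each review to its successor.
    srt = sorted(reviews, key=lambda r: r[1])
    for (name_a, t_a), (name_b, t_b) in zip(srt, srt[1:]):
        if t_b - t_a <= window:
            flagged.append((name_a, name_b))
    return flagged
```

A journal could run something like this over its review logs; non-empty output is not proof of fraud, just a prompt for a closer look.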

Anonymous Domain Registration Service [Update: 24 April 2017]

Filed under: Cybersecurity,Security — Patrick Durusau @ 6:47 pm

Pirate Bay Founder Launches Anonymous Domain Registration Service

Does this sound anonymous to you?


With Njalla, customers don’t buy the domain names themselves, they let the company do it for them. This adds an extra layer of protection but also requires some trust.

A separate agreement grants the customer full usage rights to the domain. This also means that people are free to transfer it elsewhere if they want to.

“Think of us as your friendly drunk (but responsibly so) straw person that takes the blame for your expressions,” Njalla notes.

Njalla

Perhaps I’m being overly suspicious but what is the basis for trusting Njalla?

I would feel better if Njalla only possessed a key that would decrypt (read authenticate) messages as arriving from the owner of some.domain.

Other than payment, what other interest do they have in an owner’s actual identity?

Perhaps I should bump them about that idea.


Update: On further inquiry, registration only requires an email or jabber contact point. You can handle being anonymous to Njalla at those points. So, more anonymous than I thought.

Dissing Facebook’s Reality Hole and Impliedly Censoring Yours

Filed under: Censorship,Facebook,Free Speech,Social Media — Patrick Durusau @ 4:42 pm

Climbing Out Of Facebook’s Reality Hole by Mat Honan.

From the post:

The proliferation of fake news and filter bubbles across the platforms meant to connect us have instead divided us into tribes, skilled in the arts of abuse and harassment. Tools meant for showing the world as it happens have been harnessed to broadcast murders, rapes, suicides, and even torture. Even physics have betrayed us! For the first time in a generation, there is talk that the United States could descend into a nuclear war. And in Silicon Valley, the zeitgeist is one of melancholy, frustration, and even regret — except for Mark Zuckerberg, who appears to be in an absolutely great mood.

The Facebook CEO took the stage at the company’s annual F8 developers conference a little more than an hour after news broke that the so-called Facebook Killer had killed himself. But if you were expecting a somber mood, it wasn’t happening. Instead, he kicked off his keynote with a series of jokes.

It was a stark disconnect with the reality outside, where the story of the hour concerned a man who had used Facebook to publicize a murder, and threaten many more. People used to talk about Steve Jobs and Apple’s reality distortion field. But Facebook, it sometimes feels, exists in a reality hole. The company doesn’t distort reality — but it often seems to lack the ability to recognize it.

I can’t say I’m fond of the Facebook reality hole but unlike Honan:


It can make it harder to use its platforms to harass others, or to spread disinformation, or to glorify acts of violence and destruction.

I have no desire to censor any of the content that anyone cares to make and/or view on it. Bar none.

The “default” reality settings desired by Honan and others are a thumb on the scale for some cause they prefer over others.

Entitled to their preference but I object to their setting the range of preferences enjoyed by others.

You?

April 22, 2017

ARM Releases Machine Readable Architecture Specification (Intel?)

Filed under: Architecture,Cybersecurity,Documentation,Programming,XML — Patrick Durusau @ 9:03 pm

ARM Releases Machine Readable Architecture Specification by Alastair Reid.

From the post:

Today ARM released version 8.2 of the ARM v8-A processor specification in machine readable form. This specification describes almost all of the architecture: instructions, page table walks, taking interrupts, taking synchronous exceptions such as page faults, taking asynchronous exceptions such as bus faults, user mode, system mode, hypervisor mode, secure mode, debug mode. It details all the instruction formats and system register formats. The semantics is written in ARM’s ASL Specification Language so it is all executable and has been tested very thoroughly using the same architecture conformance tests that ARM uses to test its processors (See my paper “Trustworthy Specifications of ARM v8-A and v8-M System Level Architecture”.)

The specification is being released in three sets of XML files:

  • The System Register Specification consists of an XML file for each system register in the architecture. For each register, the XML details all the fields within the register, how to access the register and which privilege levels can access the register.
  • The AArch64 Specification consists of an XML file for each instruction in the 64-bit architecture. For each instruction, there is the encoding diagram for the instruction, ASL code for decoding the instruction, ASL code for executing the instruction and any supporting code needed to execute the instruction and the decode tree for finding the instruction corresponding to a given bit-pattern. This also contains the ASL code for the system architecture: page table walks, exceptions, debug, etc.
  • The AArch32 Specification is similar to the AArch64 specification: it contains encoding diagrams, decode trees, decode/execute ASL code and supporting ASL code.

Alastair provides starting points for use of this material by outlining his prior uses of the same.
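As a sketch of the kind of processing a machine-readable specification enables: the snippet below pulls field layouts out of a register description. The element and attribute names are placeholders of my own, not ARM's actual schema, so check the released XML files before relying on any of them.

```python
import xml.etree.ElementTree as ET

# A made-up fragment in the spirit of a per-register XML file; the real
# schema ships with ARM's release and will differ in element names.
SAMPLE = """
<register name="SCTLR_EL1">
  <field name="M" msb="0" lsb="0"/>
  <field name="A" msb="1" lsb="1"/>
  <field name="EE" msb="25" lsb="25"/>
</register>
"""

def field_map(xml_text):
    """Return {field_name: (msb, lsb)} for one register description."""
    root = ET.fromstring(xml_text)
    return {f.get("name"): (int(f.get("msb")), int(f.get("lsb")))
            for f in root.iter("field")}
```

With the real files, the same few lines of traversal could feed disassemblers, emulators, or documentation generators, which is exactly the point of shipping the specification as data rather than PDF.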

Raises the question why an equivalent machine readable data set isn’t available for Intel® 64 and IA-32 Architectures? (PDF manuals)

The data is there, but not in a machine readable format.

Anyone know why Intel doesn’t provide the same convenience?

Journalism Is Skepticism as a Service (SaaS)

Filed under: Journalism,News,Reporting — Patrick Durusau @ 8:33 pm

Image from the Fourth Estate Journalism Association.

I applaud the sentiment and supporting the Fourth Estate is one way to bring it closer to reality.

At the same time, unless and until The New York Times, National Public Radio, and others start reporting US terrorist attacks (bombings) with the same terminology as so-called “terrorists” in their coverage, “Journalism Is Skepticism as a Service (SaaS)” remains an aspiration, not a reality.

Shortfall in Peer Respect and Accomplishment

Filed under: Cybersecurity,Government — Patrick Durusau @ 8:13 pm

I didn’t expect UK government confirmation of my post, Shortfall in Cybersecurity Talent or Compensation?, so quickly!

I argued against the groundless claims of a shortage of cybersecurity talent in the face of escalating cybercrime and hacking statistics.

If there were a shortage of cybersecurity talent, cybercrime should be going down. But it’s not.

The National Crime Agency reports:

The National Crime Agency has today published research into how and why some young people become involved in cyber crime.

The report, which is based on debriefs with offenders and those on the fringes of criminality, explores why young people assessed as unlikely to commit more traditional crimes get involved in cyber crime.

The report emphasises that financial gain is not necessarily a priority for young offenders. Instead, the sense of accomplishment at completing a challenge, and proving oneself to peers in order to increase online reputations are the main motivations for those involved in cyber criminality.

Government agencies, like the FBI for example, are full of lifers who take their breaks at precisely 14:15, have their favorite parking spots, play endless office politics, and are masters of passive aggression, making government and/or corporate work too painful to contemplate for young cybersecurity talent.

In short, a lack of meaningful peer respect and a sense of accomplishment is defeating both government and private hiring of cybersecurity talent.

Read Pathways Into Cyber Crime and evaluate how the potential young hires described there would react to your staff meetings and organizational structure.

That bad? Wow, you are worse off than I thought.

So, are you going to keep with your certificate-driven, cubicle-based, Dilbert-like cybersecurity effort?

How’s that working out for you?

You will have to take risks to find better solutions but you are losing already. Enough to chance a different approach?

April 21, 2017

Shortfall in Cybersecurity Talent or Compensation?

Filed under: Cybersecurity — Patrick Durusau @ 8:19 pm

Federal effort is needed to address shortfall in cybersecurity talent by Mike McConnell and Judy Genshaft.

If you want to get in on the cybersecurity training scam business, there are a number of quotes you can lift from this post. Consider:

Our nation is under attack. Every day, thousands of entities – private enterprises, public institutions and individual citizens—have their computer networks breached, their systems hacked and their data stolen, degraded or destroyed. Such critical infrastructure impacts the cyber-sanctity of our banking system and electric power grid, each vital to our national security. We believe systemically developing more skilled cybersecurity defenders is the essential link needed to protect our nation from ‘bad actors’ who would exploit our vital systems.

In its latest global survey, the Information Security Certification Consortium (ISC²) projects a cybersecurity talent shortfall of as much as 1.8 million professionals by 2022. This shortage in skilled cybersecurity professionals means that all data and digital systems are at risk. Closing the cyber talent gap will require sustained and concerted efforts of government, the private sector, and educational institutions at all levels.

If you don’t already know that hacking increases every year, spend some time at: Hackmageddon. Or with any security report on hacking.

Think about it. How does cybercrime keep increasing during a shortfall of cybersecurity talent?

Answer: It doesn’t. Plenty of cybersecurity talent, just a shortfall on one side of the picture.

A “shortfall,” if you want to call it that, caused by low wages and unreasonable working conditions (no weed, even off the job).

All calls for more cybersecurity talent emphasize being on the “right side,” protecting your country, the system, etc., all BS that you can’t put in the bank.

If you want better cybersecurity, offer aggressive compensation packages and very flexible working conditions. The talent is out there, it’s just not free. (Nor should it be.)

April 20, 2017

Leak “Threatens Windows Users Around The World?”

Filed under: Cybersecurity,NSA,Security — Patrick Durusau @ 8:58 pm

Leaked NSA Malware Threatens Windows Users Around The World? by Sam Biddle.

Really? Shadow Brokers leaking alleged NSA malware “threatens users around the world?”

Hmmm, I would think that the NSA developing Windows malware is what threatens users around the world.

Yes?

Unlike the apparent industry concealment of vulnerabilities, the leaking of NSA malware puts all users on an equal footing with regard to those vulnerabilities.

In a phrase, users are better off for the NSA malware leak than they were before.

They know (or at least it has been alleged) that these leaked vulnerabilities have been patched in supported Microsoft products. By upgrading to those products, they can avoid these particular pieces of NSA malware.

Leaking vulnerabilities enables users to avoid perils themselves, in this case by upgrading, and/or to demand patches from vendors responsible for the vulnerabilities.

Do you see a downside I don’t?

Well, aside from trashing the market for vulnerabilities and gelding security agencies, neither one of which I will lose any sleep over.

Black Womxn Authors, Library of Congress and MarcXML (Part 2)

Filed under: Library software,Literature,MARC,MARCXML — Patrick Durusau @ 8:40 pm

(After writing this post I got a message from Clifford Anderson on a completely different way to approach the Marc to XML problem. A very neat way. But, I thought the directions on installing MarcEdit on Ubuntu 16.04 would be helpful anyway. More on Clifford’s suggestion to follow.)

If you’re just joining, read Black Womxn Authors, Library of Congress and MarcXML (Part 1) for the background on why this flurry of installation is at all meaningful!

The goal is to get a working copy of MarcEdit installed on my Ubuntu 16.04 machine.

MarcEdit Linux Installation Instructions reads in part:

Installation Steps:

  1. Download the MarcEdit app bundle. This file has been zipped to reduce the download size. http://marcedit.reeset.net/software/marcedit.bin.zip
  2. Unzip the file and open the MarcEdit folder. Find the Install.txt file and read it.
  3. Ensure that you have the Mono framework installed. What is Mono? Mono is an open source implementation of Microsoft’s .NET framework. The best way to describe it is that .NET is very Java-like; it’s a common runtime that can work across any platform in which the framework has been installed. There are a number of ways to get the Mono framework — for MarcEdit’s purposes, it is recommended that you download and install the official package available from the Mono Project’s website. You can find the Mac OSX download here: http://www.go-mono.com/mono-downloads/download.html
  4. Run MarcEdit via the command-line using mono MarcEdit.exe from within the MarcEdit directory.

Well, sort of. 😉

First, you need to go to the Mono Project Download page. From there, under Xamarin packages, follow Debian, Ubuntu, and derivatives.

There is a package for Ubuntu 16.10, but it’s Mono 4.2.1. By installing the Xamarin packages, I am running Mono 4.7.0. Your call but as a matter of habit, I run the latest compatible packages.

Updating your package lists for Debian, Ubuntu, and derivatives:

Add the Mono Project GPG signing key and the package repository to your system (if you don’t use sudo, be sure to switch to root):

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list

And for Ubuntu 16.10:

echo "deb http://download.mono-project.com/repo/debian wheezy-apache24-compat main" | sudo tee -a /etc/apt/sources.list.d/mono-xamarin.list

Now run:

sudo apt-get update

The Usage section suggests:

The package mono-devel should be installed to compile code.

The package mono-complete should be installed to install everything – this should cover most cases of “assembly not found” errors.

The package referenceassemblies-pcl should be installed for PCL compilation support – this will resolve most cases of “Framework not installed: .NETPortable” errors during software compilation.

The package ca-certificates-mono should be installed to get SSL certificates for HTTPS connections. Install this package if you run into trouble making HTTPS connections.

The package mono-xsp4 should be installed for running ASP.NET applications.

Find and select mono-complete first. Most decent package managers will show dependencies that will be installed. Add any of these that were missed.

Do follow the hints here to verify that Mono is working correctly.

Are We There Yet?

Not quite. It was at this point that I unpacked http://marcedit.reeset.net/software/marcedit.bin.zip and discovered there is no “Install.txt file.” Rather there is a linux_install.txt, which reads:

a) Ensure that the dependencies have been installed
1) Dependency list:
i) MONO 3.4+ (Runtime plus the System.Windows.Forms library [these are sometimes separate])
ii) YAZ 5 + YAZ 5 develop Libraries + YAZ++ ZOOM bindings
iii) ZLIBC libraries
iV) libxml2/libxslt libraries
b) Unzip marcedit.zip
c) On first run:
a) mono MarcEdit.exe
b) Preferences tab will open, click on other, and set the following two values:
i) Temp path: /tmp/
ii) MONO path: [to your full mono path]

** For Z39.50 Support
d) Yaz.Sharp.dll.config — ensure that the dllmap points to the correct version of the shared libyaz object.
e) main_icon.bmp can be used for a desktop icon

Oops! Without unzipping marcedit.zip, you won’t see the dependencies:

ii) YAZ 5 + YAZ 5 develop Libraries + YAZ++ ZOOM bindings
iii) ZLIBC libraries
iV) libxml2/libxslt libraries

The YAZ site has a readme file for Ubuntu, but here is the very abbreviated version:


wget http://ftp.indexdata.dk/debian/indexdata.asc
sudo apt-key add indexdata.asc

echo "deb http://ftp.indexdata.dk/ubuntu xenial main" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://ftp.indexdata.dk/ubuntu xenial main" | sudo tee -a /etc/apt/sources.list

(That sequence only works for Ubuntu xenial. See the readme file for other versions.)

Of course:

sudo apt-get update

As of today, you are looking for yaz 5.21.0-1 and libyaz5-dev 5.21.0-1.

Check for and/or install ZLIBC and libxml2/libxslt libraries.

Personal taste, but I reboot at this point to make sure all the libraries reload at the correct versions. It should work without rebooting, but that’s up to you.

Fire it up with

mono MarcEdit.exe

Choose Locations (not Other) and confirm “Set Temporary Path:” is /tmp/. For MONO Path (the location of mono), try which mono, input the result, and select OK.

I did the install on Sunday evening and so after all this, the software on loading announces it has been upgraded! Yes, while I was installing all the dependencies, a new and improved version of MarcEdit was posted.

The XML extraction is a piece of cake, so I am working on the XQuery over the resulting MarcXML records for part 3.
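If you want a head start before part 3, here is a stdlib-only sketch of pulling titles (MARC field 245, subfield a) out of MarcXML. The sample record is fabricated for illustration; real records carry many more fields:

```python
import xml.etree.ElementTree as ET

MARCXML_NS = "http://www.loc.gov/MARC21/slim"

# A minimal, hand-written MarcXML collection for demonstration only.
SAMPLE = """<collection xmlns="http://www.loc.gov/MARC21/slim">
  <record>
    <datafield tag="245" ind1="1" ind2="0">
      <subfield code="a">Beloved /</subfield>
      <subfield code="c">Toni Morrison.</subfield>
    </datafield>
  </record>
</collection>"""

def titles(xml_text):
    """Extract 245$a (title proper) from every record in a MarcXML collection."""
    root = ET.fromstring(xml_text)
    ns = {"m": MARCXML_NS}
    out = []
    for rec in root.findall("m:record", ns):
        # ElementTree's limited XPath supports attribute predicates like [@tag="245"].
        for sub in rec.findall('m:datafield[@tag="245"]/m:subfield[@code="a"]', ns):
            out.append(sub.text)
    return out
```

The same pattern extends to any tag/subfield pair; XQuery will make the richer queries in part 3 far more pleasant, but for a quick extraction ElementTree does fine.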

Conclusive Reason To NOT Use Gmail

Filed under: Email,Government,Law — Patrick Durusau @ 8:07 pm

Using an email service, Gmail for example, that tracks (and presumably reads) your incoming and outgoing mail is poor security judgement.

Following a California magistrate ruling on 19 April 2017, it’s suicidal.

Shaun Nichols covers the details in Nuh-un, Google, you WILL hand over emails stored on foreign servers, says US judge.

But the only part of the decision that should interest you reads:


The court denies Google’s motion to quash the warrant for content that it stores outside the United States and orders it to produce all content responsive to the search warrant that is retrievable from the United States, regardless of the data’s actual location.

Beeler takes heart from the dissents in In the Matter of a Warrant to Search a Certain E-Mail Account Controlled & Maintained by Microsoft Corp., 829 F.3d 197 (2d Cir. 2016), reh’g denied en banc, No. 14-2985, 2017 WL 362765 (2d Cir. Jan. 24, 2017), to find that if data isn’t intentionally stored outside the US and can be accessed from within the US, then it’s subject to a warrant under 18 U.S.C. § 2703(a), the Stored Communications Act (“SCA”).

I have a simpler perspective: do you want to risk fortune and freedom on how-many-angels-can-dance-on-a-pinhead questions about 18 U.S.C. § 2703(a), the Stored Communications Act (“SCA”)?

If your answer is no, don’t use Gmail. Or any other service where data can be accessed from the United States under 18 U.S.C. § 2703(a), or under similar statutes in other jurisdictions.

For that matter, prudent users restrict themselves to Tor based mail services and always use strong encryption.

Almost any communication can be taken as a crime or step in a conspiracy by a prosecutor inclined to do so.

The only partially safe haven is silence. (Where encryption and/or inability to link you to the encrypted communication = silence.)

Who Prefers Zero Days over 7 Year Old Bugs? + Legalization of Hacking

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:25 pm

“Who” is not clear but Dan Goodin reports in Windows bug used to spread Stuxnet remains world’s most exploited that:

One of the Microsoft Windows vulnerabilities used to spread the Stuxnet worm that targeted Iran remained the most widely exploited software bug in 2015 and 2016 even though the bug was patched years earlier, according to a report published by antivirus provider Kaspersky Lab.

In 2015, 27 percent of Kaspersky users who encountered any sort of exploit were exposed to attacks targeting the critical Windows flaw indexed as CVE-2010-2568. In 2016, the figure dipped to 24.7 percent but still ranked the highest. The code-execution vulnerability is triggered by plugging a booby-trapped USB drive into a vulnerable computer. The second most widespread exploit was designed to gain root access rights to Android phones, with 11 percent in 2015 and 15.6 percent last year.

A market share of almost 25%, despite being patched in 2010, marks CVE-2010-2568 as one of the top bugs a hacker should have in their toolkit.

Not to denigrate finding zero-day flaws in vibrators and other IoT devices, or more exotic potential exploits in the Linux kernel, but if you approach hacking as an investment, the “best” tools aren’t always the most recent ones. (“Best” defined as the highest return for mastery and use.)

Looking forward to the legalization of hacking (unauthorized penetration of information systems), with civil and criminal penalties for owners of the systems that get hacked.

I suggest that because making hacking illegal has done nothing to stem the tide of hacking. Mostly because threatening people you can’t find, or who think they won’t be found, is by definition ineffectual.

Making hacking legal and penalizing business interests that get hacked is a threat against people you can find on a regular basis. They pay taxes, register their stocks, market their products.

Speaking of paying taxes, there could be an OS upgrade tax credit. Something to nudge all the Windows XP, Vista, and 7 instances out of existence. That alone would be the largest single improvement in cybersecurity since the term was coined.

Legalized, hackers would provide a continuing incentive (fines and penalties) for better software and more consistent upgrade practices. Take advantage of that large pool of unpaid but enthusiastic labor (hackers).

April 19, 2017

Dive Into NLTK – Update – No NLTK Book 2nd Edition

Filed under: Natural Language Processing,NLTK — Patrick Durusau @ 8:46 pm

Dive Into NLTK, Part I: Getting Started with NLTK

From the webpage:

NLTK is the most famous Python Natural Language Processing Toolkit, here I will give a detail tutorial about NLTK. This is the first article in a series where I will write everything about NLTK with Python, especially about text mining and text analysis online.

This is the first article in the series “Dive Into NLTK”, here is an index of all the articles in the series that have been published to date:

Part I: Getting Started with NLTK (this article)
Part II: Sentence Tokenize and Word Tokenize
Part III: Part-Of-Speech Tagging and POS Tagger
Part IV: Stemming and Lemmatization
Part V: Using Stanford Text Analysis Tools in Python
Part VI: Add Stanford Word Segmenter Interface for Python NLTK
Part VII: A Preliminary Study on Text Classification
Part VIII: Using External Maximum Entropy Modeling Libraries for Text Classification
Part IX: From Text Classification to Sentiment Analysis
Part X: Play With Word2Vec Models based on NLTK Corpus

My first post in this series had only the first seven lessons listed.

There’s another reason for this update.

It appears that no second edition of Natural Language Processing with Python is likely to appear.

Sounds like an opportunity for the NLTK community to continue the work already started.

I don’t have the chops to contribute high quality code but would be willing to work with others on proofing/editing (that’s the part of book production readers rarely see).

Shadow Brokers Compilation Dates

Filed under: CIA,Cybersecurity — Patrick Durusau @ 8:17 pm

ShadowBrokers EquationGroup Compilation Timestamp Observation

From the post:

I looked at the IOCs @GossiTheDog posted, looked each up in VirusTotal and dumped the compilation timestamp into a spreadsheet.

To step back a second, the Microsoft Windows compiler embeds the date and time that the given .exe or .dll was compiled. Compilation time is a very useful characteristic of Portable Executable. Malware authors could zero it or change it to a random value, but I’m not sure there is any indication of that here. If the compilation timestamps are real, then there’s an interesting observation in this dataset.

A very clever observation! Check time stamps for patterns!
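You can read the same timestamp yourself without VirusTotal: the PE file format stores the compile time as a 32-bit Unix timestamp in the COFF header. A minimal stdlib sketch, with only basic validation, and keeping the post's caveat in mind that authors can zero or forge this field:

```python
import struct
from datetime import datetime, timezone

def pe_compile_time(data):
    """Return the TimeDateStamp of a PE file given its raw bytes.

    Layout: the DOS header stores the offset of the PE header at 0x3C
    (e_lfanew); the COFF TimeDateStamp sits 8 bytes past the "PE\\0\\0"
    signature (after the 4-byte signature, Machine, and NumberOfSections).
    Forged or zeroed values pass through unchecked.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ signature)")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    (stamp,) = struct.unpack_from("<I", data, e_lfanew + 8)
    return datetime.fromtimestamp(stamp, tz=timezone.utc)
```

Run over a directory of samples and sorted, output like this is exactly what surfaces the 2013-08-22 cutoff the post discusses.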

Enables an attentive reader to ask:

  1. Were the Shadow Brokers exploits stolen prior to 2013-08-22?
  2. If no to #1, where are the exploits post 2013-08-22?

Have the dumps so far been distant lightning that precedes much closer thunderclaps?

Imagine compilation timestamps in 2014, 2015, or even 2016?

Listen for Shadow Brokers to roar!

Building a Keyword Monitoring Pipeline… (Think Download Before Removal)

Filed under: Intelligence,Open Source Intelligence — Patrick Durusau @ 4:50 pm

Building a Keyword Monitoring Pipeline with Python, Pastebin and Searx by Justin Seitz.

From the post:

Having an early warning system is an incredibly useful tool in the OSINT world. Being able to monitor search engines and other sites for keywords, IP addresses, document names, or email addresses is extremely useful. This can tell you if an adversary, competitor or a friendly ally is talking about you online. In this blog post we are going to setup a keyword monitoring pipeline so that we can monitor both popular search engines and Pastebin for keywords, leaked credentials, or anything else we are interested in.

The pipeline will be designed to alert you whenever one of those keywords is discovered or if you are seeing movement for a keyword on a particular search engine.

Learning of data that was posted but is no longer available, is a sad thing.

Increase your odds of grabbing data before removal by following Justin’s post.
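Justin's pipeline has several moving parts, but its core matching step is tiny. A sketch of just that piece; the keyword list and the alert hook are placeholders, and the actual fetching (Searx queries, the Pastebin scraping API) is covered in his post:

```python
def find_hits(text, keywords):
    """Return the subset of keywords present in text, case-insensitively.

    In a real pipeline this runs over each newly fetched paste or search
    result, and any non-empty return triggers an alert (mail, webhook, ...).
    """
    lowered = text.lower()
    return sorted(k for k in keywords if k.lower() in lowered)

def monitor(pages, keywords, alert):
    """Scan an iterable of (url, text) pairs, calling alert(url, hits)."""
    for url, text in pages:
        hits = find_hits(text, keywords)
        if hits:
            alert(url, hits)
```

Wire `pages` up to whatever fetcher you trust, persist which URLs you have already seen, and you have the skeleton of the early-warning system, with the grabbed text saved locally before any takedown.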

A couple of caveats:

  • I would not use GMail, preferring a Tor mail solution, especially for tracking Pastebin postings.
  • Use and rotate at random VPN connections for your Searx setup.

Going completely dark takes more time and effort than most of us can spare, but you can avoid being like a new car dealership with searchlights crossing the sky.

Pure CSS crossword – CSS Grid

Filed under: Crossword Puzzle,Education,XPath,XQuery,XSLT — Patrick Durusau @ 4:22 pm

Pure CSS crossword – CSS Grid by Adrian Roworth.

The UI is slick, although creating the puzzle remains on you.

Certainly suitable for string answers, XQuery/XPath/XSLT expressions, etc.

Enjoy!

April 18, 2017

An Initial Reboot of Oxlos

Filed under: Crowd Sourcing,Greek — Patrick Durusau @ 7:27 pm

An Initial Reboot of Oxlos by James Tauber.

From the post:

In a recent post, Update on LXX Progress, I talked about the possibility of putting together a crowd-sourcing tool to help share the load of clarifying some parse code errors in the CATSS LXX morphological analysis. Last Friday, Patrick Altman and I spent an evening of hacking and built the tool.

Back at BibleTech 2010, I gave a talk about Django, Pinax, and some early ideas for a platform built on them to do collaborative corpus linguistics. Patrick Altman was my main co-developer on some early prototypes and I ended up hiring him to work with me at Eldarion.

The original project was called oxlos after the betacode transcription of the Greek word for “crowd”, a nod to “crowd-sourcing”. Work didn’t continue much past those original prototypes in 2010 and Pinax has come a long way since so, when we decided to work on oxlos again, it made sense to start from scratch. From the initial commit to launching the site took about six hours.

At the moment there is one collective task available—clarifying which of a set of parse codes is valid for a given verb form in the LXX—but as the need for others arises, it will be straightforward to add them (and please contact me if you have similar tasks you’d like added to the site).
… (emphasis in the original)

Crowd sourcing, parse code errors in the CATSS LXX morphological analysis, Patrick Altman and James Tauber! What more could you ask for?

Well, assuming you enjoy Django development, see https://github.com/jtauber/oxlos2; or if you have the Greek morphology, sign up at: http://oxlos.org/.

After mastering Greek, you don’t really want to lose it from lack of practice. Yes? Perfect opportunity for recent or even not so recent Classics and divinity majors.

I suppose that’s a nice way to say you won’t be encountering LXX Greek on ESPN or CNN. 😉

D3 in Depth – Update

Filed under: D3,Graphics — Patrick Durusau @ 7:05 pm

D3 in Depth by Peter Cook

Peter has added three more chapters since my last visit, with another eight (8) to go.

I don’t know about you or Peter, but when people are showing interest in my work, I tend to work more diligently on it.

Drop by, ask questions, make suggestions.

Enjoy!

Older Posts »

Powered by WordPress