Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

January 21, 2018

A “No One Saw It Coming” Memory Hack (Schneider Electric)

Filed under: Cybersecurity,Hacking,Security — Patrick Durusau @ 8:13 pm

Schneider Electric: TRITON/TRISIS Attack Used 0-Day Flaw in its Safety Controller System, and a RAT by Kelly Jackson Higgins.

Industrial control systems giant Schneider Electric discovered a zero-day privilege-escalation vulnerability in its Triconex Tricon safety-controller firmware which helped allow sophisticated hackers to wrest control of the emergency shutdown system in a targeted attack on one of its customers.

Researchers at Schneider also found a remote access Trojan (RAT) in the so-called TRITON/TRISIS malware that they say represents the first-ever RAT to infect safety-instrumented systems (SIS) equipment. Industrial sites such as oil and gas and water utilities typically run multiple SISes to independently monitor critical systems to ensure they are operating within acceptable safety thresholds, and when they are not, the SIS automatically shuts them down.

Schneider here today provided the first details of its investigation of the recently revealed TRITON/TRISIS attack that targeted a specific SIS used by one of its industrial customers. Two of the customer’s SIS controllers entered a failed safe mode that shut down the industrial process and ultimately led to the discovery of the malware.

Teams of researchers from Dragos and FireEye’s Mandiant last month each published their own analysis of the malware used in the attack, noting that the smoking gun – a payload that would execute a cyber-physical attack – had not been found.

Perhaps the most amusing part of the post is Schneider’s attribution of near super-human capabilities to the hackers:


Schneider’s controller is based on proprietary hardware that runs on a PowerPC processor. “We run our own proprietary operating system on top of that, and that OS is not known to the public. So the research required to pull this [attack] off was substantial,” including reverse-engineering it, Forney says. “This bears resemblance to a nation-state, someone who was highly financed.”

The attackers also had knowledge of Schneider’s proprietary protocol for Tricon, which also is undocumented publicly, and used it to create their own library for sending commands to interact with Tricon, he says.

Alternatives to a nation-state:

  • 15 year old working with junked Schneider hardware and the Schneider help desk
  • Disgruntled Schneider Electric employee or their children
  • Malware planted to force a quick and insecure patch being pushed out

I discount all the security chest beating by vendors. Their goal: continued use of their products.

Are your Schneider controllers air-gapped and audited?

Bludgeoning Bootloader Bugs:… (Rebecca “.bx” Shapiro – job hunting)

Filed under: Cybersecurity,Security — Patrick Durusau @ 5:20 pm

Bludgeoning Bootloader Bugs: No write left behind by Rebecca “.bx” Shapiro.

Slides from ShmooCon 2018.

If you are new to bootloading, consider Shapiro’s two blog posts on the topic:

A History of Linux Kernel Module Signing

A Tour of Bootloading

both from 2015, and her resources page.

Aside from the slides, her most current work is found at: https://github.com/bx/bootloader_instrumentation_suite.

ShmooCon 2018 just finished earlier today, but check the ShmooCon archives for a video of Shapiro’s presentation.

I don’t normally post shout-outs for people seeking employment, but Shapiro does impressive work and she is sharing it with the broader community. Unlike some governments and corporations we could all name. Pass her name and details along.

Are You Smarter Than A 15 Year Old?

Filed under: Cybersecurity,Government,Hacking,Politics,Security — Patrick Durusau @ 1:27 pm

15-Year-Old Schoolboy Posed as CIA Chief to Hack Highly Sensitive Information by Mohit Kumar.

From the post:

A notorious pro-Palestinian hacking group behind a series of embarrassing hacks against United States intelligence officials leaked the personal details of 20,000 FBI agents, 9,000 Department of Homeland Security officers, and some number of DoJ staffers in 2015.

Believe or not, the leader of this hacking group was just 15-years-old when he used “social engineering” to impersonate CIA director and unauthorisedly access highly sensitive information from his Leicestershire home, revealed during a court hearing on Tuesday.

Kane Gamble, now 18-year-old, the British teenager hacker targeted then CIA director John Brennan, Director of National Intelligence James Clapper, Secretary of Homeland Security Jeh Johnson, FBI deputy director Mark Giuliano, as well as other senior FBI figures.

Between June 2015 and February 2016, Gamble posed as Brennan and tricked call centre and helpline staff into giving away broadband and cable passwords, using which the team also gained access to plans for intelligence operations in Afghanistan and Iran.

Gamble said he targeted the US government because he was “getting more and more annoyed about how corrupt and cold-blooded the US Government” was and “decided to do something about it.”

Your questions:

1. Are You Smarter Than A 15 Year Old?

2. Are You Annoyed by a Corrupt and Cold-blooded Government?

3. Have You Decided to do Something about It?

Yeses for #1 and #2 number in the hundreds of millions.

The lack of governments hemorrhaging data worldwide is silent proof that #3 is a very small number.

What’s your answer to #3? (Don’t post it in the comments.)

Collaborative Journalism Projects (Collaboration Opportunities for the Public?)

Filed under: Journalism,News,Reporting — Patrick Durusau @ 11:00 am

Database: Search, sort and learn about collaborative journalism projects from around the world

From the post:

Over the past several months, the Center for Cooperative Media has been collecting, organizing and standardizing information about dozens and dozens of collaborative journalism projects around the world. Our goal was to build a database that could serve as a hub of information about collaborative journalism, something that would be useful to journalists, scholars, media executives, funders and others seeking information on the how such projects work, who’s doing them and what they’re covering.

We worked with Melody Kramer to build the first iteration of the database, which you can find below. It is a work in progress, and you’ll see that it’s still incomplete as we continue to add to it. So far for this soft launch, we’ve input information on 94 news collaborations between more than 800 organizations and 151 people.

But this is just the beginning. We need your help.

Is your project listed? If not, tell us about it. Is the information about your project incorrect? Let us know; email Melody at melodykramer@gmail.com. Are there fields missing you’d like to see us add, or other ways to sort that you think would be useful? Email the Center at info@centerforcooperativemedia.org. We’re using Airtable right now, but are still considering what the best way will be to display the treasure trove of data we’re collecting.

Some notes on navigating the database: First, it’s easier to see the whole picture on desktop than on mobile, although both work well. To see the full record for any particular project, click on the little blue arrow that appears to the left of the project name when you hover over it. You can sort by column as well.

Collaborative journalism is a great way to avoid duplication of effort and to find strength in numbers. This resource is a big step towards encouraging journalist-to-journalist collaboration.

Opportunities for members of the public to collaborate with journalists?

Suggestions?

January 18, 2018

What Can Reverse Engineering Do For You?

Filed under: Cybersecurity,Reverse Engineering,Security — Patrick Durusau @ 9:18 pm

From the description:

Reverse engineering is a core skill in the information security space, but it doesn’t necessarily get the widespread exposure that other skills do even though it can help you with your security challenges. We will talk about getting you quickly up and running with a reverse engineering starter pack and explore some interesting x86 assembly code patterns you may encounter in the wild. These patterns are essentially common malware evasion techniques that include packing, analysis evasion, shellcode execution, and crypto usages. It is not always easy recognizing when a technique is used. This talk will begin by defining each technique as a pattern and then the approaches for reading or bypassing the evasion.

Technical keynote at Shellcon 2017 by Amanda Rousseau (@malwareunicorn).

Even if you’re not interested in reverse engineering, watch the video to see a true master describing their craft.

The “patterns” she speaks of are what I would call “subject identity” in a topic maps context.

TLDR pages (man pages by example)

Filed under: Documentation,Linux OS — Patrick Durusau @ 5:55 pm

TLDR pages

From the webpage:

The TLDR pages are a community effort to simplify the beloved man pages with practical examples.

The TLDR Pages Book (pdf) has 274 pages!

If you have ever hunted through a man page for an example, you will appreciate TLDR pages!

I first saw this in a tweet by Christophe Lalanne.

Launch of DECLASSIFIED

Filed under: Government,Intelligence,Politics — Patrick Durusau @ 11:48 am

Launch of DECLASSIFIED by Mark Curtis.

From the post:

I am about to publish on this site hundreds of UK declassified documents and articles on British foreign policy towards various countries. This will be the first time such a collection has been brought together online.

The declassified documents, mainly from the UK’s National Archives, reveal British policy-makers actual concerns and priorities from the 1940s until the present day, from the ‘horse’s mouth’, as it were: these files are often revelatory and provide an antidote to the often misleading and false mainstream media (and academic) coverage of Britain’s past and present foreign policies.

The documents include my collections of files, accumulated over many years and used as a basis for several books, on episodes such as the UK’s covert war in Yemen in the 1960s, the UK’s support for the Pinochet coup in Chile, the UK’s ‘constitutional coup’ in Guyana, the covert wars in Indonesia in the 1950s, the UK’s backing for wars against the Iraqi Kurds in the 1960s, the coup in Oman in 1970, support for the Idi Amin takeover in Uganda and many others policies since 1945.

But the collection also brings together many other declassified documents by listing dozens of media articles that have been written on the release of declassified files over the years. It also points to some US document releases from the US National Security Archive.

A new resource for those of you tracking the antics of the small and the silly through the 20th and into the 21st century.

I say the “small and the silly” because there’s no doubt that similar machinations have been part and parcel of the lives of government toadies for as long as there have been governments. Despite the exaggerated sense of their own importance and the history-making importance of their efforts, almost none of their names survive in the ancient historical record.

With the progress of time, the same fate awaits the most recent and current crop of government familiars. While we wait for them to pass into obscurity, you can amuse yourself by outing them and tracking their activities.

This new archive may assist you in your efforts.

Be sure to keep topic maps in mind for mapping between disjoint vocabularies and collections of documents as well as accounts of events.

For Some Definition of “Read” and “Answer” – MS Clickbait

Filed under: Artificial Intelligence,Machine Learning,Microsoft — Patrick Durusau @ 11:37 am

Microsoft creates AI that can read a document and answer questions about it as well as a person by Allison Linn.

From the post:

It’s a major milestone in the push to have search engines such as Bing and intelligent assistants such as Cortana interact with people and provide information in more natural ways, much like people communicate with each other.

A team at Microsoft Research Asia reached the human parity milestone using the Stanford Question Answering Dataset, known among researchers as SQuAD. It’s a machine reading comprehension dataset that is made up of questions about a set of Wikipedia articles.

According to the SQuAD leaderboard, on Jan. 3, Microsoft submitted a model that reached the score of 82.650 on the exact match portion. The human performance on the same set of questions and answers is 82.304. On Jan. 5, researchers with the Chinese e-commerce company Alibaba submitted a score of 82.440, also about the same as a human.

With machine reading comprehension, researchers say computers also would be able to quickly parse through information found in books and documents and provide people with the information they need most in an easily understandable way.

That would let drivers more easily find the answer they need in a dense car manual, saving time and effort in tense or difficult situations.

These tools also could let doctors, lawyers and other experts more quickly get through the drudgery of things like reading through large documents for specific medical findings or rarified legal precedent. The technology would augment their work and leave them with more time to apply the knowledge to focus on treating patients or formulating legal opinions.

Wait, wait! If you read the details about SQuAD, you realize how far Microsoft (or anyone else) is from “…reading through large documents for specific medical findings or rarified legal precedent….”

What is the SQuAD test?

Stanford Question Answering Dataset (SQuAD) is a new reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage. With 100,000+ question-answer pairs on 500+ articles, SQuAD is significantly larger than previous reading comprehension datasets.

Not to take anything away from Microsoft Research Asia or the creators of SQuAD, but “…the answer to every question is a segment of text, or span, from the corresponding reading passage.” is a long way from synthesizing an answer from a long legal document.

The first hurdle is asking a question that can be scored against every “…segment of text, or span…” such that a relevant snippet of text can be found.

The second hurdle is the process of scoring snippets of text in order to retrieve the most useful one. That’s a mechanical process, not one that depends on the semantics of the underlying question or text.

There are other hurdles but those two suffice to show there is no “reading and answering questions” in the same sense we would apply to any human reader.
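To make the “extractive” point concrete, here is a minimal sketch (my own illustration, not Microsoft’s model or the official SQuAD scorer) of span selection by lexical overlap: the “answer” is whichever sentence of the passage shares the most words with the question. Nothing is synthesized; the output is always a slice of the input.

```python
# Minimal sketch of extractive "question answering": return the passage
# sentence with the greatest word overlap with the question. Illustration
# only, not Microsoft's model or the SQuAD evaluation code.
import re

def best_span(question, passage):
    q_words = set(re.findall(r"\w+", question.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    return max(sentences,
               key=lambda s: len(q_words & set(re.findall(r"\w+", s.lower()))))

passage = ("The Stanford Question Answering Dataset consists of questions posed "
           "by crowdworkers on Wikipedia articles. The answer to every question "
           "is a segment of text from the corresponding reading passage.")
print(best_span("What is the answer to every question?", passage))
```

However the real systems score candidate spans (and they do it far better than word overlap), the answer is still selected, not composed, which is the gap between SQuAD performance and summarizing a car manual or a legal brief.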

Click-bait headlines don’t serve the cause of advocating more AI research. On the contrary, a close reading of alleged progress leads to disappointment.

January 16, 2018

Tips for Entering the Penetration Testing Field

Filed under: Cybersecurity,Hacking — Patrick Durusau @ 7:29 pm

Tips for Entering the Penetration Testing Field by Ed Skoudis.

From the post:

It’s an exciting time to be a professional penetration tester. As malicious computer attackers amp up the number and magnitude of their breaches, the information security industry needs an enormous amount of help in proactively finding and resolving vulnerabilities. Penetration testers who are able to identify flaws, understand them, and demonstrate their business impact through careful exploitation are an important piece of the defensive puzzle.

In the courses I teach on penetration testing, I’m frequently asked about how someone can land their first job in the field after they’ve acquired the appropriate technical skills and gained a good understanding of methodologies. Also, over the past decade, I’ve counseled a lot of my friends and acquaintances as they’ve moved into various penetration testing jobs. Although there are many different paths to pen test nirvana, let’s zoom into three of the most promising. It’s worth noting that these three paths aren’t mutually exclusive either. I know many people who started on the first path, jumped to the second mid-way, and later found themselves on path #3. Or, you can jumble them up in arbitrary order.

Career advice and a great listing of resources for any aspiring penetration “tester.”

If you do penetration work for a government, you may be a national hero. If you do commercial penetration testing, not a national hero but not on the run either. If you do non-sanctioned penetration work, life is uncertain. Same skill, same activity. Go figure.

Updated Hacking Challenge Site Links (Signatures as Subject Identifiers)

Filed under: CTF,Cybersecurity,Hacking — Patrick Durusau @ 7:14 pm

Updated Hacking Challenge Site Links

From the post:

These are 70+ sites which offer free challenges for hackers to practice their skills. Some are web-based challenges, some require VPN access to private labs and some are downloadable ISOs and VMs. I’ve tested the links at the time of this posting and they work.

Most of them are at https://www.wechall.net but if I missed a few they will be there.

WeChall is a portal to hacking challenges where you can link your account to all the sites and get ranked. I’ve been a member since 2/2/14.

Internally to the site they have challenges there as well so make sure you check them out!

To find CTFs go to https://www.ctftime.org

On Twitter in the search field type CTF

Google is also your friend.

I’d rephrase “Google is also your friend.” to “Sometimes Google allows you to find ….”

When visiting hacker or CTF (capture the flag) sites, use the same levels of security as any government or other known hostile site.

What is an exploit or vulnerability signature if not a subject identifier?

Data Science Bowl 2018 – Spot Nuclei. Speed Cures.

Filed under: Bioinformatics,Biomedical,Contest,Data Science — Patrick Durusau @ 5:16 pm

Spot Nuclei. Speed Cures.

From the webpage:

The 2018 Data Science Bowl offers our most ambitious mission yet: Create an algorithm to automate nucleus detection and unlock faster cures.

Compete on Kaggle

Three months. $100,000.

Even if you “lose,” think of the experience you will gain. No losers.

Enjoy!

PS: Just thinking out loud, but if:


This dataset contains a large number of segmented nuclei images. The images were acquired under a variety of conditions and vary in the cell type, magnification, and imaging modality (brightfield vs. fluorescence). The dataset is designed to challenge an algorithm’s ability to generalize across these variations.

isn’t the ability to generalize, with its attendant lower performance, a downside?

Why not use the best algorithm for a specified set of data conditions, “merging” that algorithm so to speak, so that scientists always have the best algorithm for their specific data set?

So outside the contest, perhaps the conditions of the images are the most important subjects, and each data set should be matched to the algorithm that performs best under its particular conditions.
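A minimal sketch of that idea in Python (the segmenter functions and condition keys below are hypothetical placeholders of mine, not contest code):

```python
# Minimal sketch: route each image to the segmentation algorithm that performs
# best under its acquisition conditions. The segmenter functions and condition
# keys are hypothetical placeholders, not contest code.
def segment_fluorescence(image):
    ...  # specialist model for fluorescence imagery

def segment_brightfield(image):
    ...  # specialist model for brightfield imagery

def segment_generic(image):
    ...  # the generalist fallback the contest is asking for

SEGMENTERS = {
    "fluorescence": segment_fluorescence,
    "brightfield": segment_brightfield,
}

def segment(image, modality):
    """Pick the specialist for this modality, fall back to the generalist."""
    return SEGMENTERS.get(modality, segment_generic)(image)
```

The dictionary is doing the “merging”: the imaging conditions act as the subjects that bind each data set to its best algorithm.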

Anyone interested in collaborating on a topic map entry?

January 15, 2018

The Art & Science Factory

Filed under: Art,Complexity,Science — Patrick Durusau @ 8:10 pm

The Art & Science Factory

From the about page:


The Art & Science Factory was started in 2008 by Dr. Brian Castellani to organize the various artistic, scientific and educational endeavours he and different collaborators have engaged in to address the growing complexity of global life.

Dr. Castellani is a complexity scientist/artist.

He is internationally recognized for his expertise in complexity science and its history and for his development of the SACS Toolkit, a case-based, mixed-methods, computationally-grounded framework for modeling complex systems. Dr. Castellani’s main area of study is applying complexity science and the SACS Toolkit to various topics in health and healthcare, including community health and medical education.

In terms of visual complexity, Castellani is recognized around the world for his creation of the complexity map, which can be found on Wikipedia and on this website. He is also recognized for his blog on “all things complexity science and art,” the Sociology and Complexity Science Blog.
… (emphasis in original)

Dr. Castellani apparently dislikes searchable text; the about page quote above was hand-transcribed from the image that constitutes that page.

Unexpectedly, the SACS Toolkit and the other items mentioned were not hyperlinked, so here they are: SACS Toolkit, complexity map, and Sociology and Complexity Science Blog, respectively.

2018 Map of the Complexity Sciences

Filed under: Complexity,Visualization — Patrick Durusau @ 5:07 pm

2018 Map of the Complexity Sciences by Brian Castellani.

At full screen this map barely displays on my 22″ monitor so I’m not going to mangle it into something smaller for this post.

The reading instructions read in part:


Also, in order to present some type of organizational structure, the history of the complexity sciences is developed along the field’s five major intellectual traditions: dynamical systems theory (purple), systems science (blue), complex systems theory (yellow), cybernetics (gray) and artificial intelligence (orange). Again, the fit is not exact (and sometimes even somewhat forced); but it is sufficient to help those new to the field gain a sense of its evolving history.

The subject and person nodes are all hyperlinks to additional resources!

Enjoy!

Fun, Frustration, Curiosity, Murderous Rage – mimic

Filed under: Humor,Programming,Unicode — Patrick Durusau @ 10:09 am

mimic

From the webpage:


There are many more characters in the Unicode character set that look, to some extent or another, like others – homoglyphs. Mimic substitutes common ASCII characters for obscure homoglyphs.

Fun games to play with mimic:

  • Pipe some source code through and see if you can find all of the problems
  • Pipe someone else’s source code through without telling them
  • Be fired, and then killed

I can attest to the murderous rage from experience. There was a browser-based SGML parser that would barf on the presence of an extra whitespace (space I think) in the SGML declaration. One file worked, another with the “same” declaration did not.

Only by printing and comparing the files (this was on Windoze machines) was the errant space discovered.
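A minimal sketch of the kind of check that would have saved that printing-and-comparing session: walk a file and name every character that is not plain printable ASCII, using Python’s unicodedata module (the reporting format is my own choice).

```python
# Minimal sketch: flag characters outside printable ASCII and name them,
# the sort of check that exposes homoglyphs and stray whitespace quickly.
import sys
import unicodedata

def flag_suspicious(text):
    """Print location and Unicode name of every non-printable-ASCII character."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ord(ch) > 126 or (ord(ch) < 32 and ch != "\t"):
                name = unicodedata.name(ch, "UNKNOWN")
                print(f"line {lineno}, col {col}: U+{ord(ch):04X} {name}")

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as handle:
        flag_suspicious(handle.read())
```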

Enjoy!

January 12, 2018

Tactical Advantage: I don’t have to know everything, just more than you.

Filed under: Crowd Sourcing,Mapping,Maps — Patrick Durusau @ 5:09 pm

Mapping the Ghostly Traces of Abandoned Railroads – An interactive, crowdsourced atlas plots vanished transit routes by Jessica Leigh Hester.

From the post:

In the 1830s, a rail line linked Elkton, Maryland, with New Castle, Delaware, shortening the time it took to shuttle people and goods between the Delaware River and Chesapeake Bay. Today you’d never know it had been there. A photograph snapped years after the line had been abandoned captures a stone culvert halfway to collapse into the creek it spanned. Another image, captured even later, shows a relict trail that looks more like a footpath than a railroad right-of-way. The compacted dirt seems wide enough to accommodate no more than two pairs of shoes at a time.

The scar of the New Castle and Frenchtown Railroad barely whispers of the railcars that once barreled through. That’s what earned it a place on Andrew Grigg’s map.

For the past two years, Grigg, a transit enthusiast, has been building an interactive atlas of abandoned railroads. Using Google Maps, he lays the ghostly silhouettes of the lines over modern aerial imagery. His recreation of the 16-mile New Castle and Frenchtown Line crosses state lines and modern highways, marches through suburban housing developments, and passes near a cineplex, a Walmart, and a paintball field.
… (emphasis in original)

Great example of a project capturing travel paths that may be omitted from modern maps. Being omitted from a map doesn’t impact the potential use of an abandoned railway as an alternative to other routes.

Be sure to check ahead of time: digital navigation systems may have omitted discontinued railroads.

The same advantage obtains if you know which underpasses flood after a heavy rain, which streets are impassable, when trains are passing over certain crossings, all manner of information that isn’t captured by standard digital navigation systems.

What information can you add to a map that isn’t known to or thought to be important by others?

Computus manuscripts and where to find them

Filed under: Manuscripts,Maps — Patrick Durusau @ 3:48 pm

Computus manuscripts and where to find them

An interactive map of computus manuscripts by place of preservation.


From the about page:

Welcome to the bèta version of Computus.lat, an online platform for teaching and research in studies of the medieval science of computus. Computus.lat consists of a catalogue of computistical manuscripts and computistical objects, a bibliography, and a number of resources (such as a Mirador-viewer and data visualizations).

Follow @computuslat on Twitter for updates.

Kind regards,
Thom Snijders

Over 500 manuscripts online!

Oh, Computus:

Computus in its simplest definition is the art of ascertaining time by the course of the sun and the moon. This art could be and was a theoretical science, such as that explored by Johannes of Sacrobosco in his De sphera–a science based on arithmetical calculations and astronomical measurements derived from use of the astrolabe or, increasingly by the end of the 13th century, the solar quadrant. In the context of the present exhibit, however, computus is understood mainly as the practical application of these calculations. To reckon time in the broadest sense and to determine the date of Easter became one and the same effort. And for most people, understanding the problem of correct alignment of solar, lunar, yearly and weekly cycles to arrive at the date of Easter was simply reduced to a question of “when?” rather than “why?”. The result was a profusion of calculation formulae, charts and memory devices.

Accompanying these handy mechanisms for determining the date of Easter were many other bits of calendrical information that faith, prejudice and experience leveled to the same degree of acceptance and necessity: the lucky and the unlucky days for travel or for eating goose; the prognostications of rain or wind; the times for bloodletting; the signs of the zodiac; the phases of the moon; the number of hours of sunshine in a given day; the feasts of the saints; the Sundays in a perpetual calendar.

Take heed of the line: “The result was a profusion of calculation formulae, charts and memory devices.” (emphasis added)

And you think we have trouble with daylight savings time and time zones. 😉
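For the curious, the modern descendant of those formulae fits in a few lines. A minimal sketch of the Anonymous Gregorian algorithm (Meeus/Jones/Butcher) in Python, offered as an illustration of what a computus computes, not as anything drawn from the manuscripts themselves:

```python
# Minimal sketch: the Anonymous Gregorian computus (Meeus/Jones/Butcher),
# returning the month and day of Easter for a given Gregorian year.
def easter(year):
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-related correction
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month = (h + l - 7 * m + 114) // 31
    day = ((h + l - 7 * m + 114) % 31) + 1
    return month, day

print(easter(2018))  # (4, 1): Easter fell on April 1 in 2018
```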

Pass this along to manuscript scholars, liturgy buffs, historians, anyone interested in our diverse religious history.

A [Selective] Field Guide to “Fake News” and other Information Disorders

Filed under: Journalism,News,Reporting — Patrick Durusau @ 2:15 pm

New guide helps journalists, researchers investigate misinformation, memes and trolling by Liliana Bounegru and Jonathan Gray.

Recent scandals about the role of social media in key political events in the US, UK and other European countries over the past couple of years have underscored the need to understand the interactions between digital platforms, misleading information and propaganda, and their influence on collective life in democracies.

In response to this, the Public Data Lab and First Draft collaborated last year to develop a free, open-access guide to help students, journalists and researchers investigate misleading and viral content, memes and trolling practices online.

Released today, the five chapters of the guide describe a series of research protocols or “recipes” that can be used to trace trolling practices, the ways false viral news and memes circulate online, and the commercial underpinnings of problematic content. Each recipe provides an accessible overview of the key steps, methods, techniques and datasets used.

The guide will be most useful to digitally savvy and social media literate students, journalists and researchers. However, the recipes range from easy formulae that can be executed without much technical knowledge other than a working understanding of tools such as BuzzSumo and the CrowdTangle browser extension, to ones that draw on more advanced computational techniques. Where possible, we try to offer the recipes in both variants.

Download the guide at the Public Data Lab’s website.

The techniques in the guide are fascinating but the underlying definition of “fake news” is problematic:


The guide explores the notion that fake news is not just another type of content that circulates online, but that it is precisely the character of this online circulation and reception that makes something into fake news. In this sense fake news may be considered not just in terms of the form or content of the message, but also in terms of the mediating infrastructures, platforms and participatory cultures which facilitate its circulation. In this sense, the significance of fake news cannot be fully understood apart from its circulation online. It is the register of this circulation that also enables us to trace how material that starts its life as niche satire can be repackaged as hyper-partisan clickbait to generate advertising money and then continue life as an illustration of dangerous political misinformation.

As a consequence this field guide encourages a shift from focusing on the formal content of fabrications in isolation to understanding the contexts in which they circulate online. This shift points to the limits of a “deficit model” approach – which might imply that fabrications thrive only because of a deficit of factual information. In the guide we suggest new ways of mapping and responding to fake news beyond identifying and fact-checking suspect claims – including “thicker” accounts of circulation as a way to develop a richer understanding of how fake news moves and mobilises people, more nuanced accounts of “fakeness” and responses which are better attuned to the phenomenon.
… (page 8)

The means by which information circulates is always relevant to the study of communications. However, notice that the authors’ definition excludes traditional media from its quest to identify “fake news.” Really? Traditional media isn’t responsible for the circulation of any “fake news?”

Examples of traditional media fails are legion but here is a recent and spectacular one: The U.S. Media Suffered Its Most Humiliating Debacle in Ages and Now Refuses All Transparency Over What Happened by Glenn Greenwald.

Friday was one of the most embarrassing days for the U.S. media in quite a long time. The humiliation orgy was kicked off by CNN, with MSNBC and CBS close behind, and countless pundits, commentators, and operatives joining the party throughout the day. By the end of the day, it was clear that several of the nation’s largest and most influential news outlets had spread an explosive but completely false news story to millions of people, while refusing to provide any explanation of how it happened.

The spectacle began Friday morning at 11 a.m. EST, when the Most Trusted Name in News™ spent 12 straight minutes on air flamboyantly hyping an exclusive bombshell report that seemed to prove that WikiLeaks, last September, had secretly offered the Trump campaign, even Donald Trump himself, special access to the Democratic National Committee emails before they were published on the internet. As CNN sees the world, this would prove collusion between the Trump family and WikiLeaks and, more importantly, between Trump and Russia, since the U.S. intelligence community regards WikiLeaks as an “arm of Russian intelligence,” and therefore, so does the U.S. media.

This entire revelation was based on an email that CNN strongly implied it had exclusively obtained and had in its possession. The email was sent by someone named “Michael J. Erickson” — someone nobody had heard of previously and whom CNN could not identify — to Donald Trump Jr., offering a decryption key and access to DNC emails that WikiLeaks had “uploaded.” The email was a smoking gun, in CNN’s extremely excited mind, because it was dated September 4 — 10 days before WikiLeaks began promoting access to those emails online — and thus proved that the Trump family was being offered special, unique access to the DNC archive: likely by WikiLeaks and the Kremlin.

There was just one small problem with this story: It was fundamentally false, in the most embarrassing way possible. Hours after CNN broadcast its story — and then hyped it over and over and over — the Washington Post reported that CNN got the key fact of the story wrong.

This fundamentally false story does not qualify as “fake news” for this guide. Surprised?

The criteria for “fake news” also exclude questioning statements from members of the intelligence community, a community that includes James Clapper, a self-confessed and known liar who continues to be the darling of mainstream media outlets.

Cozy relationships between news organizations and their reporters with government and intelligence sources are also not addressed as potential sources of “fake news.”

Limiting the scope of a “fake news” study in order to have a doable project is understandable. However, excluding factually false stories, use of known liars and corrupting relationships, all because they occur in mainstream media, looks like picking a target to tar with the label “fake news.”

The guides and techniques themselves may be quite useful, so long as you remember they were designed to show social media as the spreader of “fake news.”

One last thing: what the authors don’t offer, and I haven’t seen reports of, is the effectiveness of the so-called “fake news” with voters. Taking “Pope Francis Endorses Trump” as a lie, however widely that story spread, did it have any impact on the 2016 election? Or did every reader do a double-take and move on? It’s possible to answer that type of question, but it does require facts.

Getting Started with Python/CLTK for Historical Languages

Filed under: Classics,Language,Python — Patrick Durusau @ 2:03 pm

Getting Started with Python/CLTK for Historical Languages by Patrick J. Burns.

From the post:

This is a ongoing project to collect online resources for anybody looking to get started with working with Python for historical languages, esp. using the Classical Language Toolkit. If you have suggestions for this lists, email me at patrick[at]diyclassics[dot]org.

What classical or historical language resources would you recommend?

Complete Guide to Topic Modeling (Recommender System for Email Dumps?)

Filed under: Uncategorized — Patrick Durusau @ 1:42 pm

Complete Guide to Topic Modeling with scikit-learn and gensim by George-Bogdan Ivanov.

From the post:

Why is Topic Modeling useful?

There are several scenarios when topic modeling can prove useful. Here are some of them:

  • Text classification – Topic modeling can improve classification by grouping similar words together in topics rather than using each word as a feature
  • Recommender Systems – Using a similarity measure we can build recommender systems. If our system would recommend articles for readers, it will recommend articles with a topic structure similar to the articles the user has already read.
  • Uncovering Themes in Texts – Useful for detecting trends in online publications for example

Would a recommender system be useful for reading email dumps? 😉
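As a minimal sketch of what that could look like (the stand-in corpus, the 10-topic choice, and the variable names are mine, not from the guide): fit an LDA model over the emails with scikit-learn, then recommend the messages whose topic mixture is closest to one already read.

```python
# Minimal sketch: topic-model an email dump with scikit-learn LDA, then
# recommend messages whose topic mixture is most similar to one already read.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

emails = [                      # stand-in corpus; replace with the email dump
    "budget meeting moved to thursday, agenda attached",
    "fundraising totals for the quarter and donor list",
    "server maintenance window scheduled for the weekend",
]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
counts = vectorizer.fit_transform(emails)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
topic_mix = lda.fit_transform(counts)        # one topic vector per email

def recommend(read_index, top_n=5):
    """Indices of the emails closest in topic space to the one already read."""
    scores = cosine_similarity(topic_mix[read_index:read_index + 1], topic_mix)[0]
    ranked = scores.argsort()[::-1]
    return [i for i in ranked if i != read_index][:top_n]

print(recommend(0))
```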

Within or across candidates for Congress?

Secrets to Searching for Video Footage (AI Assistance In Your Future?)

Filed under: Artificial Intelligence,Deep Learning,Journalism,News,Reporting,Searching — Patrick Durusau @ 11:24 am

Secrets to Searching for Video Footage by Aric Toler.

From the post:

Much of Bellingcat’s work requires intense research into particular events, which includes finding every possible photograph, video and witness account that will help inform our analysis. Perhaps most notably, we exhaustively researched the events surrounding the shoot down of Malaysian Airlines Flight 17 (MH17) over eastern Ukraine.

The photographs and videos taken near the crash in eastern Ukraine were not particularly difficult to find, as they were widely publicized. However, locating over a dozen photographs and videos of the Russian convoy transporting the Buk anti-aircraft missile launcher that shot down MH17 three weeks before the tragedy was much harder, and required both intense investigation on social networks and some creative thinking.

Most of these videos were shared on Russian-language social networks and YouTube, and did not involve another type of video that is much more important today than it was in 2014 — live streaming. Bellingcat has also made an effort to compile all user-generated videos of the events in Charlottesville on August 12, 2017, providing a database of livestreamed videos on platforms like Periscope, Ustream and Facebook Live, along with footage uploaded after the protest onto platforms like Twitter and YouTube.

Verifying videos is important, as detailed in this Bellingcat guide, but first you have to find them. This guide will provide advice and some tips on how to gather as much video as possible on a particular event, whether it is videos from witnesses of a natural disaster or a terrorist attack. For most examples in this guide, we will assume that the event is a large protest or demonstration, but the same advice is applicable to other events.

I was amused by this description of Snapchat and Instagram:


Snapchat and Instagram are two very common sources for videos, but also two of the most difficult platforms to trawl for clips. Neither has an intuitive search interface that easily allows researchers to sort through and collect videos.

I’m certain that’s true, but a trained AI could sort videos obtained by overly broad requests. As I’m fond of pointing out, not with 100% accuracy, but you can’t get that with humans either.

Augment your searching with a tireless AI. For best results, add or consult a librarian as well.

PS: I have other concerns at the moment, but a subset of the Bellingcat Charlottesville database would make a nice training basis for an AI, which could then be loosed on Instagram and other sources to discover more videos. The usual stumbling block for AI projects is human-curated material, which Bellingcat has already supplied.
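As a sketch of what that training could look like (the file name and the “description”/“relevant” columns below are my assumptions, not Bellingcat’s actual schema): treat the curated database as labeled examples, fit a text classifier on the video descriptions, and use it to triage newly scraped candidates for human review.

```python
# Minimal sketch: use a human-curated video list as training data for a text
# classifier that triages newly scraped candidates. The file name and the
# "description"/"relevant" columns are assumptions, not Bellingcat's schema.
import csv

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def load_curated(path):
    with open(path, newline="", encoding="utf-8") as handle:
        rows = list(csv.DictReader(handle))
    return [r["description"] for r in rows], [int(r["relevant"]) for r in rows]

texts, labels = load_curated("charlottesville_curated.csv")  # hypothetical file
model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Score freshly scraped descriptions; a human still reviews the top hits.
candidates = ["livestream of the downtown protest march", "cat video compilation"]
print(model.predict_proba(candidates)[:, 1])
```

Nothing fancy, but with a few hundred curated rows it could already shrink the pile a human has to watch.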

Leaking Resources for Federal Employees with Ties to ‘Shithole’ Countries

Filed under: Journalism,Leaks,News,Reporting — Patrick Durusau @ 10:58 am

Trump derides protections for immigrants from ‘shithole’ countries by Josh Dawsey.

From the post:

President Trump grew frustrated with lawmakers Thursday in the Oval Office when they discussed protecting immigrants from Haiti, El Salvador and African countries as part of a bipartisan immigration deal, according to several people briefed on the meeting.

“Why are we having all these people from shithole countries come here?” Trump said, according to these people, referring to countries mentioned by the lawmakers.

The EEOC Annual Report for 2014 reports that, out of 2.7 million women and men employed by the federal government:

…63.50% were White, 18.75% were Black or African American, 8.50% were Hispanic or Latino, 6.16% were Asian, 1.49% were American Indian or Alaska Native, 1.16% were persons of Two or More Races and 0.45% were Native Hawaiian or Other Pacific Islander…(emphasis added)

In other words, 27.25% of the 2.7 million people working for the federal government, or approximately 736,000 federal employees, have ties to ‘shithole’ countries.

President Trump’s rude remarks are an accurate reflection of current U.S. immigration policy:

The United States treats other countries as ‘shitholes’ but it is considered impolite to mention that in public.

Federal employees with ties to ‘shithole’ countries are at least as loyal as, if not more loyal than, your average staffer.

That said, I’m disappointed that media outlets did not immediately call upon federal employees with ties to ‘shithole’ countries to start leaking documents/data.

Here are some places documents can be leaked to:

More generally, see Here’s how to share sensitive leaks with the press and their excellent listing of SecureDrop resources for anonymous submission of documents.

If you have heard of the Panama Papers or the Paradise Papers, then you are thinking about the International Consortium of Investigative Journalists. They do excellent work, but like the other journalists mentioned, are obsessed with being in control of the distribution of your leak.

Every outrage, whether a shooting, unjust imprisonment, racist remarks, religious bigotry, is an opportunity to incite leaking by members of a group.

Not calling for leaking speaks volumes about your commitment to the status quo and its current injustices.

January 11, 2018

The art of writing science

Filed under: Conferences,Science,Writing — Patrick Durusau @ 4:21 pm

The art of writing science by Kevin W. Plaxco

From the post:

The value of writing well should not be underestimated. Imagine, for example, that you hold in your hand two papers, both of which describe precisely the same set of experimental results. One is long, dense, and filled with jargon. The other is concise, engaging, and easy to follow. Which are you more likely to read, understand, and cite? The answer to this question hits directly at the value of good writing: writing well leverages your work. That is, while even the most skillful writing cannot turn bad science into good science, clear and compelling writing makes good science more impactful, and thus more valuable.

The goal of good writing is straightforward: to make your reader’s job as easy as possible. Realizing this goal, though, is not so simple. I, for one, was not a natural-born writer; as a graduate student, my writing was weak and rambling, taking forever to get to the point. But I had the good fortune to postdoc under an outstanding scientific communicator, who taught me the above-described lesson that writing well is worth the considerable effort it demands. Thus inspired, I set out to teach myself how to communicate more effectively, an effort that, some fifteen years later, I am still pursuing.

Along the way I have learned a thing or two that I believe make my papers easier to read, a few of which I am pleased to share with you here. Before I share my hard-won tips, though, I have an admission: there is no single, correct way to write. In fact, there are a myriad of solutions to the problem of writing well (see, e.g., Refs.1–4). The trick, then, is not to copy someone else’s voice, but rather to study what works—and what does not—in your own writing and that of others to formulate your own guide to effective communication. Thus, while I present here some of my most cherished writing conventions (i.e., the rules that I force on my own students), I do not mean to imply that they represent the only acceptable approach. Indeed, you (or your mentor) may disagree strongly with many of the suggestions I make below. This, though, is perfectly fine: my goal is not to convince you that I have found the one true way, but instead simply to get people thinking and talking about writing. I do so in the hope that this will inspire a few more young scientists to develop their own effective styles.

The best way to get the opportunity to do a great presentation for Balisage 2018 is to write a great paper for Balisage 2018. A great paper is step one towards being accepted and having a chance to bask in the admiration of other markup geeks.

OK, so it’s not so much basking as trying to see by starlight on a cloudy night.

Still, a great paper will impress the reviewers and if accepted, readers when it appears in the proceedings for this year.

Strong suggestion: Try Plaxco’s first-sentence-of-the-paragraph test on your paper (or any paper you are reviewing). If it fails, start over.

I volunteer to do peer review for Balisage so I’m anticipating some really well-written papers this year.

The David Attenborough Style of Scientific Presentation (Historic First for Balisage?)

Filed under: Communication,Conferences,Presentation — Patrick Durusau @ 4:17 pm

The David Attenborough Style of Scientific Presentation by Will Ratcliff.

From the post:

One of the biggest hurdles to giving a good talk is convincing people that it’s worth their mental energy to listen to you. This approach to speaking is designed to get that buy-in from the audience, without them even realizing they are doing so. The key to this is exploitation of a simple fact: people are curious creatures by nature and will pay attention to a cool story as long as that story remains absolutely clear.

In the D.A. style of speaking, you are the narrator of an interesting story. The goal is to have a visually streamlined talk where the audience is so engaged with your presentation that they forget you’re standing in front of them speaking. Instead, they’re listening to your narrative and seeing the visuals that accompany your story, at no point do they have to stop and try to make sense of what you just said.

A captivating two (2) page summary of the David Attenborough (DA) style for presentations. At first, since I don’t travel any longer, I wasn’t going to mention it.

On a second or third read, the blindingly obvious hit me:

Rules that work for live conference presentations, also work for video podcasts, lectures, client presentations, anywhere you are seeking to effectively communicate to others. (I guess that rules out White House press briefings.)

Paper submission dates aren’t out yet for Balisage 2018 but your use of DA style for your presentation would be a historic first, so far as I know. 😉

No promises, but a video of the same presentation delivered in both “normal” style and DA style could be an interesting data point.

Introduction to reverse engineering and Assembly (Suicidal Bricking by Ubuntu Servers)

Filed under: Assembly,Cybersecurity,Reverse Engineering,Security — Patrick Durusau @ 4:05 pm

Introduction to reverse engineering and Assembly by Youness Alaoui.

From the post:

Recently, I’ve finished reverse engineering the Intel FSP-S “entry” code, that is from the entry point (FspSiliconInit) all the way to the end of the function and all the subfunctions that it calls. This is only some initial foray into reverse engineering the FSP as a whole, but reverse engineering is something that takes a lot of time and effort. Today’s blog post is here to illustrate that, and to lay the foundations for understanding what I’ve done with the FSP code (in a future blog post).

Over the years, many people asked me to teach them what I do, or to explain to them how to reverse engineer assembly code in general. Sometimes I hear the infamous “How hard can it be?” catchphrase. Last week someone I was discussing with thought that the assembly language is just like a regular programming language, but in binary form—it’s easy to make that mistake if you’ve never seen what assembly is or looks like. Historically, I’ve always said that reverse engineering and ASM is “too complicated to explain” or that “If you need help to get started, then you won’t be able to finish it on your own” and various other vague responses—I often wanted to explain to others why I said things like that but I never found a way to do it. You see, when something is complex, it’s easy to say that it’s complex, but it’s much harder to explain to people why it’s complex.

I was lucky to recently stumble onto a little function while reverse engineering the Intel FSP, a function that was both simple and complex, where figuring out what it does was an interesting challenge that I can easily walk you through. This function wasn’t a difficult thing to understand, and by far, it’s not one of the hard or complex things to reverse engineer, but this one is “small and complex enough” that it’s a perfect example to explain, without writing an entire book or getting into the more complex aspects of reverse engineering. So today’s post serves as a “primer” guide to reverse engineering for all of those interested in the subject. It is a required read in order to understand the next blog posts I would be writing about the Intel FSP. Ready? Strap on your geek helmet and let’s get started!
… (emphasis in original)

Intel? Intel? I heard something recently about Intel chips. You? 😉

No, this won’t help you specifically with Spectre and Meltdown, but it’s a step in the direction of building such skills.
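If you want to see raw bytes turned back into mnemonics before committing to a full disassembler setup, here is a minimal sketch using the Capstone engine’s Python bindings (assuming the capstone package is installed; the byte string is an arbitrary example, not FSP code):

```python
# Minimal sketch: disassemble a few x86-64 bytes with the Capstone engine.
# The byte string is an arbitrary example, not Intel FSP code.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

CODE = b"\x55\x48\x89\xe5\x48\x83\xec\x10\xc3"  # push; mov; sub; ret

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(CODE, 0x1000):
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```

Reading the output is where the actual reverse engineering starts; the point of the sketch is only that the tooling hurdle is low.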

The Project Zero team at Google did not begin life with the skills necessary to discover Spectre and Meltdown.

It took 20 years for those vulnerabilities to be discovered.

What vulnerabilities await discovery by you?

PS: Word on the street is that Ubuntu 16.04 servers are committing suicide rather than run more slowly with patches for Meltdown and Spectre. Meltdown and Spectre Patches Bricking Ubuntu 16.04 Computers. The attribution of intention to Ubuntu servers may be a bit overdone but the bricking part is true.

W. E. B. Du Bois as Data Scientist

Filed under: Data Science,Social Sciences,Socioeconomic Data,Visualization — Patrick Durusau @ 3:51 pm

W. E. B. Du Bois’s Modernist Data Visualizations of Black Life by Allison Meier.

From the post:

For the 1900 Exposition Universelle in Paris, African American activist and sociologist W. E. B. Du Bois led the creation of over 60 charts, graphs, and maps that visualized data on the state of black life. The hand-drawn illustrations were part of an “Exhibit of American Negroes,” which Du Bois, in collaboration with Thomas J. Calloway and Booker T. Washington, organized to represent black contributions to the United States at the world’s fair.

This was less than half a century after the end of American slavery, and at a time when human zoos displaying people from colonized countries in replicas of their homes were still common at fairs (the ruins of one from the 1907 colonial exhibition in Paris remain in the Bois de Vincennes). Du Bois’s charts (recently shared by data artist Josh Begley on Twitter) focus on Georgia, tracing the routes of the slave trade to the Southern state, the value of black-owned property between 1875 and 1889, comparing occupations practiced by blacks and whites, and calculating the number of black students in different school courses (2 in business, 2,252 in industrial).

Ellen Terrell, a business reference specialist at the Library of Congress, wrote a blog post in which she cites a report by Calloway that laid out the 1900 exhibit’s goals:

It was decided in advance to try to show ten things concerning the negroes in America since their emancipation: (1) Something of the negro’s history; (2) education of the race; (3) effects of education upon illiteracy; (4) effects of education upon occupation; (5) effects of education upon property; (6) the negro’s mental development as shown by the books, high class pamphlets, newspapers, and other periodicals written or edited by members of the race; (7) his mechanical genius as shown by patents granted to American negroes; (8) business and industrial development in general; (9) what the negro is doing for himself though his own separate church organizations, particularly in the work of education; (10) a general sociological study of the racial conditions in the United States.

Georgia was selected to represent these 10 points because, according to Calloway, “it has the largest negro population and because it is a leader in Southern sentiment.” Rebecca Onion on Slate Vault notes that Du Bois created the charts in collaboration with his students at Atlanta University, examining everything from the value of household and kitchen furniture to the “rise of the negroes from slavery to freedom in one generation.”

The post is replete with images created by Du Bois for the exposition.

As we all know, but rarely say in public, data science and data visualization aren’t new disciplines.

The data science/visualization by Du Bois merits notice during Black History Month (February) and throughout the rest of the year as well. It’s part of our legacy in data science and we should be proud of it.

The Watchdog Press As Lapdog Press

Filed under: Journalism,Law,News,Reporting — Patrick Durusau @ 3:42 pm

When Intelligence Agencies Make Backroom Deals With the Media, Democracy Loses by Bill Blunden.

From the post:

Steven Spielberg’s new movie The Post presents the story behind Katharine Graham’s decision to publish the Pentagon Papers in The Washington Post. As the closing credits roll, one is left with the impression of a publisher who adopts an adversarial stance towards powerful government officials. Despite the director’s $50 million budget (or, perhaps, because of it), there are crucial details that are swept under the rug — details that might lead viewers towards a more accurate understanding of the relationship between the mainstream corporate press and the government.

The public record offers some clarity. Three years after Graham decided to go public with the Pentagon Papers, Seymour Hersh revealed a Central Intelligence Agency (CIA) program called Operation CHAOS in The New York Times. Hersh cited inside sources who described “a massive, illegal domestic intelligence operation during the Nixon Administration against the antiwar movement and other dissident groups in the United States.” Hersh’s article on CIA domestic operations is pertinent because, along with earlier revelations by Christopher Pyle, it prompted the formation of the Church Commission.

The Church Commission was chartered to examine abuses by United States intelligence agencies. In 1976, the commission’s final report (page 455 of Book I, entitled “Foreign and Military Intelligence”) found that the CIA maintained “a network of several hundred foreign individuals around the world who provide intelligence for the CIA and at times attempt to influence opinion through the use of covert propaganda” and that “approximately 50 of the [Agency] assets are individual American journalists or employees of US media organizations.”

These initial findings were further corroborated by Carl Bernstein, who unearthed a web of “more than 400 American journalists who in the past twenty‑five years have secretly carried out assignments for the Central Intelligence Agency.” Note that Bernstein was one of the Washington Post journalists who helped to expose the Watergate scandal. He published his piece on the CIA and the media with Rolling Stone magazine in 1977.

Show of hands. How many of you think the CIA, which freely violates surveillance and other laws, has not continued to suborn journalists, up to and including now?

Despite a recent assurance from someone whose opinion I value, journalists operating on a shoe-string have no corner on the public interest. Nor is that a guarantee they don’t have their own agendas.

Money is just one source of corruption. Access to classified information, prestige in the profession, deciding who is newsworthy and who is not, power over other reporters, are all factors that don’t operate in the public interest.

My presumption about undisclosed data in the possession of reporters accords with the State of Georgia, 24-4-22. Presumption from failure to produce evidence:

If a party has evidence in his power and within his reach by which he may repel a claim or charge against him but omits to produce it, or if he has more certain and satisfactory evidence in his power but relies on that which is of a weaker and inferior nature, a presumption arises that the charge or claim against him is well founded; but this presumption may be rebutted.

In short, evidence you don’t reveal is presumed to be against you.

That has worked for centuries in courts; why would I apply a different standard to reporters (or government officials)?

Fact Forward: Fact Free Assault on Online Misinformation

Filed under: Fake News,Journalism,News,Reporting — Patrick Durusau @ 3:00 pm

Fact Forward: If you had $50,000, how would you change fact-checking?

From the post:

The International Fact-Checking Network wants to support your next big idea.

We recognize the importance of making innovation a key part of fact-checking in the age of online misinformation and we are also aware that innovation requires investment. For those reasons, we are opening Fact Forward. A call for fact-checking organizations and/or teams of journalists, designers, developers or data scientists to submit projects that can represent a paradigmatic innovation for fact-checkers in any of these areas: 1) formats, 2) business models 3) technology-assisted fact-checking.

With Fact Forward, the IFCN will grant 50,000 USD to the winning project.

For this fund, an innovative project is defined as one that provides a distinct, novel user experience that seamlessly integrates content, design, and business strategy. The innovation should serve both the audience and the organization.

The vague definition of “innovative project” leaves the impression the judges have no expertise in software development. A quick check of the judges’ credentials reveals that is indeed the case. Be forewarned: fluffy pro-fact-checking phrases are likely to outweigh any technical merit in your proposal.

If you doubt this is an ideological project, consider the premises implied by “…the age of online misinformation….” Conceding that online misinformation does exist, those premises include:

1. Online misinformation influences voters:

What evidence does exist is reported by Hunt Allcott and Matthew Gentzkow in Social Media and Fake News in the 2016 Election, whose abstract reads:

Following the 2016 U.S. presidential election, many have expressed concern about the effects of false stories (“fake news”), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: (i) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their “most important” source; (ii) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; (iii) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and (iv) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.

Or as summarized in Don’t blame the election on fake news. Blame it on the media by Duncan J. Watts and David M. Rothschild:


In addition, given what is known about the impact of online information on opinions, even the high-end estimates of fake news penetration would be unlikely to have had a meaningful impact on voter behavior. For example, a recent study by two economists, Hunt Allcott and Matthew Gentzkow, estimates that “the average US adult read and remembered on the order of one or perhaps several fake news articles during the election period, with higher exposure to pro-Trump articles than pro-Clinton articles.” In turn, they estimate that “if one fake news article were about as persuasive as one TV campaign ad, the fake news in our database would have changed vote shares by an amount on the order of hundredths of a percentage point.” As the authors acknowledge, fake news stories could have been more influential than this back-of-the-envelope calculation suggests for a number of reasons (e.g., they only considered a subset of all such stories; the fake stories may have been concentrated on specific segments of the population, who in turn could have had a disproportionate impact on the election outcome; fake news stories could have exerted more influence over readers’ opinions than campaign ads). Nevertheless, their influence would have had to be much larger—roughly 30 times as large—to account for Trump’s margin of victory in the key states on which the election outcome depended.
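The shape of that back-of-the-envelope calculation is easy to reproduce. The sketch below is mine, not Allcott and Gentzkow’s; the numbers are placeholders chosen only to make the arithmetic concrete, so substitute the paper’s actual estimates before drawing any conclusions.

    # Back-of-the-envelope: exposure x per-article persuasion = vote share shift.
    # All inputs are illustrative placeholders, NOT the paper's estimates.

    def vote_share_shift(articles_per_adult, persuasion_per_article_pct):
        """Shift in vote share, in percentage points, if each fake article
        persuades about as well as one TV campaign ad."""
        return articles_per_adult * persuasion_per_article_pct

    articles_per_adult = 1.0            # "one or perhaps several" articles recalled
    persuasion_per_article_pct = 0.02   # hypothetical per-ad persuasiveness, in points
    decisive_margin_pct = 0.6           # hypothetical margin in the key states

    shift = vote_share_shift(articles_per_adult, persuasion_per_article_pct)
    print(f"Estimated shift: {shift:.2f} percentage points")
    print(f"Multiplier needed to cover the margin: {decisive_margin_pct / shift:.0f}x")

Whatever placeholders you choose, the argument in the quote stands or falls on that ratio, not on the absolute numbers.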

Just as one example, online advertising is routinely studied; see Understanding Interactive Online Advertising: Congruence and Product Involvement in Highly and Lowly Arousing, Skippable Video Ads by Daniel Belanche, Carlos Flavián, and Alfredo Pérez-Rueda. But the IFCN offers no similar studies for what it construes as “…online misinformation….”

Without some evidence for and measurement of the impact of “…online misinformation…,” what is the criterion for success for your project?

2. Correcting online misinformation influences voters:

The second, even more problematic assumption in this project is that correcting online misinformation influences voters.

Facts, even “correct” facts, do a poor job of changing opinions. Even the lay literature on this point is legion: Facts Don’t Change People’s Minds. Here’s What Does; Why Facts Don’t Change Our Minds; The Backfire Effect: Why Facts Don’t Win Arguments; In the battle to change people’s minds, desires come before facts; The post-fact era.

Any studies to the contrary? Surely the IFCN has some evidence that correcting misinformation changes opinions or influences voter behavior?

(I reserve this space for any studies supplied by the IFCN or others to support that premise.)

I don’t disagree with fact checking per se. Readers should be able to rely upon representations of fact. But Glenn Greenwald’s The U.S. Media Suffered Its Most Humiliating Debacle in Ages and Now Refuses All Transparency Over What Happened makes it clear that misinformation isn’t limited to appearing online.

One practical suggestion: If $50,000 is enough for your participation in an ideological project, use sentiment analysis to identify pro-Trump materials. Anything “pro-Trump” is, for some funders, “misinformation.”
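If you take that suggestion seriously, here is a minimal sketch of the idea in Python, assuming NLTK’s VADER sentiment analyzer is installed (pip install nltk, then download the vader_lexicon data once). The keyword list and threshold are arbitrary choices of mine, not anything specified by the IFCN.

    # Crude heuristic: a text is "pro-Trump" if it mentions a keyword AND reads
    # as positive overall. Keyword list and threshold are arbitrary placeholders.
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    KEYWORDS = ("trump", "maga")   # hypothetical keyword list
    THRESHOLD = 0.3                # hypothetical positivity cutoff

    analyzer = SentimentIntensityAnalyzer()

    def looks_pro_trump(text):
        lowered = text.lower()
        if not any(k in lowered for k in KEYWORDS):
            return False
        return analyzer.polarity_scores(text)["compound"] >= THRESHOLD

    for sample in (
        "Trump delivered a fantastic, historic win for the country.",
        "The new tax bill passed the Senate late last night.",
    ):
        print(looks_pro_trump(sample), "-", sample)

Which is, of course, exactly as blunt an instrument as the funding environment invites.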

PS: I didn’t vote for Trump and loathe his administration. However, pursuing fantasies to explain his 2016 victory won’t prevent a repeat in 2020. Whether he is defeated with misinformation or correct information makes no difference to me. His defeat is the only priority.

Practical projects with the goal of defeating Trump in 2020 are always of interest. Ping me.

January 10, 2018

Tails With Meltdown and Spectre Fixes w/ Caveats

Filed under: Cybersecurity,Security,Tails — Patrick Durusau @ 4:59 pm

Tails 3.4 is out

From the post:


In particular, Tails 3.4 fixes the widely reported Meltdown attack, and includes the partial mitigation for Spectre.

Timely security patches are always good news.

Three caveats:

1. Meltdown and Spectre patches originate in the same community that missed these vulnerabilities for twenty-odd years. How confident are you in these patches? (A quick way to see what your kernel claims about its mitigations is sketched after this list.)

2. Meltdown and Spectre are better evidence for the existence of other, still undiscovered, fundamental design flaws than we have for life on other planets.

3. When did the NSA become aware of Meltdown and Spectre?
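On the first caveat: you don’t have to take the patches entirely on faith. Linux kernels from 4.15 on (and some distribution backports; check whether the kernel in a given Tails release qualifies) report mitigation status under /sys/devices/system/cpu/vulnerabilities/. A minimal check:

    # Print what the running kernel claims about Meltdown/Spectre mitigations.
    # Assumes a kernel new enough to expose the vulnerabilities sysfs directory;
    # on older kernels the directory simply won't exist.
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    if not VULN_DIR.is_dir():
        print("This kernel does not report mitigation status.")
    else:
        for entry in sorted(VULN_DIR.iterdir()):
            print(f"{entry.name}: {entry.read_text().strip()}")

That only tells you what the kernel claims, which is rather the point of the caveat.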

eXist-db – First Upgrade for 2018

Filed under: eXist,XML,XML Database,XQuery — Patrick Durusau @ 2:06 pm

I usually update from notices of a new version and so rarely visit the eXist-db homepage. My loss.

There’s a cool homepage image, with links to documentation, community, and references, but not overwhelmingly so.

Kudos! Oh, the upgrade:

eXist-db v3.6.1 – January 03, 2018

From the release notes:

eXist-db v3.6.1 has just been released. This is a hotfix release, which contains bug fixes for several important issues discovered since eXist-db v3.6.0.

We recommend that all users of eXist 3.6.0 should upgrade to eXist 3.6.1.

Bug fixes

  • Fixed issue where the package manager wrote non-well-formed XML that caused problems during backup/restore. #1620
  • Fixed namespace prefix for attributes and namespace nodes.
  • Made sure the localName of a in memory element is correctly obtained under various namespace declaration conditions
  • Fix for NPE in org.exist.xquery.functions.fn.FunId #1642
  • Several atomic comparisons raise wrong error code #1638
  • General comparison to empty sequence sometimes raises an error #1639
  • Warn if no <target> is found in an EXPath packages’s repo.xml

Backwards Compatibility

  • eXist-db v3.6.1 is backwards binary-compatible as far as v3.0, but not with earlier versions. Users upgrading from previous versions should perform a full backup and restore to migrate their data.

Downloading This Version

eXist-db v3.6.1 is available for download from Bintray. Maven artifacts for eXist-db v3.6.1 are available from our mvn-repo. Mac users of the Homebrew package repository may acquire eXist 3.6.1 directly from there.

When 2018 U.S. congressional candidates’ inboxes start dropping, will eXist-db be your tool of choice?
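If the answer is yes, the barrier to entry is low. Here is a minimal sketch against eXist-db’s REST interface, assuming a stock install on localhost:8080 and a hypothetical /db/inbox collection of stored messages; the collection name, element names, credentials, and query are mine, not anything that ships with eXist.

    # Run an ad hoc XQuery against eXist-db's REST interface.
    # URL, credentials, collection, and element names are hypothetical examples.
    import requests

    EXIST_REST = "http://localhost:8080/exist/rest/db/inbox"
    QUERY = """
        for $m in collection('/db/inbox')//message[contains(subject, 'fundraiser')]
        return $m/subject
    """

    resp = requests.get(
        EXIST_REST,
        params={"_query": QUERY, "_howmany": "25"},
        auth=("admin", ""),   # default admin account; set a real password in practice
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text)          # results come back wrapped in an exist:result XML envelope

The same query runs in eXide or the Java client; REST just makes it scriptable.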

Enjoy!

Women of Islamic Studies

Filed under: Islam — Patrick Durusau @ 1:53 pm

Women of Islamic Studies by Dr. Kristian Petersen.

From the webpage:

Women of Islamic Studies is a crowdsourced database of women scholars who work on Muslims and Islam. This ongoing project is in its beta version. Once sufficient data has been collected I will partner with a university for a more stable home.

Women of Islamic Studies is intended to contest the prevalence of all-male and male dominated academic domains, such as editorial boards, conference panels, publications, guest speakers, bibliographies, books reviews, etc. and provide resources to support the recognition, citation, and inclusion of women scholars in the field of Islamic Studies. Anyone who identifies as a woman, gender non-conforming, or non-binary is welcomed on the list. The scholars listed come from a wide variety of disciplines and perspectives. “Islamic Studies” is meant to be as inclusive as possible, meaning anyone whose expertise is related to the understanding of Muslims and the Islamic tradition, and intended to demarcate a disciplinary boundary. Please feel free to list any relevant scholars who work on Islam and Muslims in any capacity. The crowdsourced contents are made possible by many contributors. Please add to our list and help spread the word.

I have contacted my graduate school Arabic professor to ask if she wants to join this list.

Who are you going to ask to join? Failing that, spread the word!
