Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

January 15, 2018

The Art & Science Factory

Filed under: Art,Complexity,Science — Patrick Durusau @ 8:10 pm

The Art & Science Factory

From the about page:


The Art & Science Factory was started in 2008 by Dr. Brian Castellani to organize the various artistic, scientific and educational endeavours he and different collaborators have engaged in to address the growing complexity of global life.

Dr. Castellani is a complexity scientist/artist.

He is internationally recognized for his expertise in complexity science and its history and for his development of the SACS Toolkit, a case-based, mixed-methods, computationally-grounded framework for modeling complex systems. Dr. Castellani’s main area of study is applying complexity science and the SACS Toolkit to various topics in health and healthcare, including community health and medical education.

In terms of visual complexity, Castellani is recognized around the world for his creation of the complexity map, which can be found on Wikipedia and on this website. He is also recognized for his blog on “all things complexity science and art,” the Sociology and Complexity Science Blog.
… (emphasis in original)

Dr. Castellani apparently dislikes searchable text: the about page quote above was hand-transcribed from the image that makes up that page.

Unexpectedly, the SACS Toolkit and the other resources mentioned were not hyperlinked, so here they are: SACS Toolkit, complexity map, and Sociology and Complexity Science Blog, respectively.

2018 Map of the Complexity Sciences

Filed under: Complexity,Visualization — Patrick Durusau @ 5:07 pm

2018 Map of the Complexity Sciences by Brian Castellani.

At full screen this map barely displays on my 22″ monitor so I’m not going to mangle it into something smaller for this post.

The reading instructions read in part:


Also, in order to present some type of organizational structure, the history of the complexity sciences is developed along the field’s five major intellectual traditions: dynamical systems theory (purple), systems science (blue), complex systems theory (yellow), cybernetics (gray) and artificial intelligence (orange). Again, the fit is not exact (and sometimes even somewhat forced); but it is sufficient to help those new to the field gain a sense of its evolving history.

The subject and person nodes are all hyperlinks to additional resources!

Enjoy!

Fun, Frustration, Curiosity, Murderous Rage – mimic

Filed under: Humor,Programming,Unicode — Patrick Durusau @ 10:09 am

mimic

From the webpage:


There are many more characters in the Unicode character set that look, to some extent or another, like others – homoglyphs. Mimic substitutes common ASCII characters for obscure homoglyphs.

Fun games to play with mimic:

  • Pipe some source code through and see if you can find all of the problems
  • Pipe someone else’s source code through without telling them
  • Be fired, and then killed
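
If you want a feel for how little it takes, here's a minimal Python sketch (my own illustration, not the mimic tool itself) that swaps a few ASCII characters for look-alike Unicode code points:

    # Minimal homoglyph substitution sketch (illustration only, not mimic itself).
    import random

    HOMOGLYPHS = {
        ";": "\u037e",  # GREEK QUESTION MARK, looks like a semicolon
        "a": "\u0430",  # CYRILLIC SMALL LETTER A
        "e": "\u0435",  # CYRILLIC SMALL LETTER IE
        "o": "\u043e",  # CYRILLIC SMALL LETTER O
    }

    def mimic(text, rate=0.1):
        """Randomly replace characters with homoglyphs at the given rate."""
        return "".join(
            HOMOGLYPHS[c] if c in HOMOGLYPHS and random.random() < rate else c
            for c in text
        )

    print(mimic("for (i = 0; i < n; i++) { total += a[i]; }", rate=0.5))

The output looks (nearly) identical on screen but no longer compiles.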

I can attest to the murderous rage from experience. There was a browser-based SGML parser that would barf on the presence of an extra whitespace (space I think) in the SGML declaration. One file worked, another with the “same” declaration did not.

Only by printing and comparing the files (this was on Windoze machines) was the errant space discovered.

Enjoy!

January 12, 2018

Tactical Advantage: I don’t have to know everything, just more than you.

Filed under: Crowd Sourcing,Mapping,Maps — Patrick Durusau @ 5:09 pm

Mapping the Ghostly Traces of Abandoned Railroads – An interactive, crowdsourced atlas plots vanished transit routes by Jessica Leigh Hester.

From the post:

In the 1830s, a rail line linked Elkton, Maryland, with New Castle, Delaware, shortening the time it took to shuttle people and goods between the Delaware River and Chesapeake Bay. Today you’d never know it had been there. A photograph snapped years after the line had been abandoned captures a stone culvert halfway to collapse into the creek it spanned. Another image, captured even later, shows a relict trail that looks more like a footpath than a railroad right-of-way. The compacted dirt seems wide enough to accommodate no more than two pairs of shoes at a time.

The scar of the New Castle and Frenchtown Railroad barely whispers of the railcars that once barreled through. That’s what earned it a place on Andrew Grigg’s map.

For the past two years, Grigg, a transit enthusiast, has been building an interactive atlas of abandoned railroads. Using Google Maps, he lays the ghostly silhouettes of the lines over modern aerial imagery. His recreation of the 16-mile New Castle and Frenchtown Line crosses state lines and modern highways, marches through suburban housing developments, and passes near a cineplex, a Walmart, and a paintball field.
… (emphasis in original)

Great example of a project capturing travel paths that may be omitted from modern maps. Being omitted from a map doesn’t impact the potential use of an abandoned railway as an alternative to other routes.

Be sure to check ahead of time: digital navigation systems may have omitted discontinued railroads.

The same advantage obtains if you know which underpasses flood after a heavy rain, which streets are impassable, when trains are passing over certain crossings, all manner of information that isn’t captured by standard digital navigation systems.

What information can you add to a map that isn’t known to or thought to be important by others?

Computus manuscripts and where to find them

Filed under: Manuscripts,Maps — Patrick Durusau @ 3:48 pm

Computus manuscripts and where to find them

An interactive map of computus manuscripts by place of preservation.

A poor screen shot:

From the about page:

Welcome to the bèta version of Computus.lat, an online platform for teaching and research in studies of the medieval science of computus. Computus.lat consists of a catalogue of computistical manuscripts and computistical objects, a bibliography, and a number of resources (such as a Mirador-viewer and data visualizations).

Follow @computuslat on Twitter for updates.

Kind regards,
Thom Snijders

Over 500 manuscripts online!

Oh, Computus:

Computus in its simplest definition is the art of ascertaining time by the course of the sun and the moon. This art could be and was a theoretical science, such as that explored by Johannes of Sacrobosco in his De sphera–a science based on arithmetical calculations and astronomical measurements derived from use of the astrolabe or, increasingly by the end of the 13th century, the solar quadrant. In the context of the present exhibit, however, computus is understood mainly as the practical application of these calculations. To reckon time in the broadest sense and to determine the date of Easter became one and the same effort. And for most people, understanding the problem of correct alignment of solar, lunar, yearly and weekly cycles to arrive at the date of Easter was simply reduced to a question of “when?” rather than “why?”. The result was a profusion of calculation formulae, charts and memory devices.

Accompanying these handy mechanisms for determining the date of Easter were many other bits of calendrical information that faith, prejudice and experience leveled to the same degree of acceptance and necessity: the lucky and the unlucky days for travel or for eating goose; the prognostications of rain or wind; the times for bloodletting; the signs of the zodiac; the phases of the moon; the number of hours of sunshine in a given day; the feasts of the saints; the Sundays in a perpetual calendar.

Take heed of the line: “The result was a profusion of calculation formulae, charts and memory devices.” (emphasis added)

And you think we have trouble with daylight saving time and time zones. 😉
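
For the modern descendant of those “calculation formulae,” the widely published Anonymous Gregorian algorithm (Meeus/Jones/Butcher) computes the date of Easter in a few lines. A minimal Python sketch:

    # Anonymous Gregorian computus (Meeus/Jones/Butcher) for the date of Easter.
    def easter(year):
        a = year % 19
        b, c = divmod(year, 100)
        d, e = divmod(b, 4)
        f = (b + 8) // 25
        g = (b - f + 1) // 3
        h = (19 * a + b - d - g + 15) % 30
        i, k = divmod(c, 4)
        l = (32 + 2 * e + 2 * i - h - k) % 7
        m = (a + 11 * h + 22 * l) // 451
        month, day = divmod(h + l - 7 * m + 114, 31)
        return month, day + 1  # (month, day) in the Gregorian calendar

    print(easter(2018))  # (4, 1) -- Easter Sunday, April 1, 2018

Medieval computists got the same answers with tables, finger reckoning and memory devices.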

Pass this along to manuscript scholars, liturgy buffs, historians, anyone interested in our diverse religious history.

A [Selective] Field Guide to “Fake News” and other Information Disorders

Filed under: Journalism,News,Reporting — Patrick Durusau @ 2:15 pm

New guide helps journalists, researchers investigate misinformation, memes and trolling by Liliana Bounegru and Jonathan Gray.

Recent scandals about the role of social media in key political events in the US, UK and other European countries over the past couple of years have underscored the need to understand the interactions between digital platforms, misleading information and propaganda, and their influence on collective life in democracies.

In response to this, the Public Data Lab and First Draft collaborated last year to develop a free, open-access guide to help students, journalists and researchers investigate misleading and viral content, memes and trolling practices online.

Released today, the five chapters of the guide describe a series of research protocols or “recipes” that can be used to trace trolling practices, the ways false viral news and memes circulate online, and the commercial underpinnings of problematic content. Each recipe provides an accessible overview of the key steps, methods, techniques and datasets used.

The guide will be most useful to digitally savvy and social media literate students, journalists and researchers. However, the recipes range from easy formulae that can be executed without much technical knowledge other than a working understanding of tools such as BuzzSumo and the CrowdTangle browser extension, to ones that draw on more advanced computational techniques. Where possible, we try to offer the recipes in both variants.

Download the guide at the Public Data Lab’s website.

The techniques in the guide are fascinating but the underlying definition of “fake news” is problematic:


The guide explores the notion that fake news is not just another type of content that circulates online, but that it is precisely the character of this online circulation and reception that makes something into fake news. In this sense fake news may be considered not just in terms of the form or content of the message, but also in terms of the mediating infrastructures, platforms and participatory cultures which facilitate its circulation. In this sense, the significance of fake news cannot be fully understood apart from its circulation online. It is the register of this circulation that also enables us to trace how material that starts its life as niche satire can be repackaged as hyper-partisan clickbait to generate advertising money and then continue life as an illustration of dangerous political misinformation.

As a consequence this field guide encourages a shift from focusing on the formal content of fabrications in isolation to understanding the contexts in which they circulate online. This shift points to the limits of a “deficit model” approach – which might imply that fabrications thrive only because of a deficit of factual information. In the guide we suggest new ways of mapping and responding to fake news beyond identifying and fact-checking suspect claims – including “thicker” accounts of circulation as a way to develop a richer understanding of how fake news moves and mobilises people, more nuanced accounts of “fakeness” and responses which are better attuned to the phenomenon.
… (page 8)

The means by which information circulates is always relevant to the study of communications. However, notice that the authors’ definition excludes traditional media from its quest to identify “fake news.” Really? Traditional media isn’t responsible for the circulation of any “fake news?”

Examples of traditional media fails are legion but here is a recent and spectacular one: The U.S. Media Suffered Its Most Humiliating Debacle in Ages and Now Refuses All Transparency Over What Happened by Glenn Greenwald.

Friday was one of the most embarrassing days for the U.S. media in quite a long time. The humiliation orgy was kicked off by CNN, with MSNBC and CBS close behind, and countless pundits, commentators, and operatives joining the party throughout the day. By the end of the day, it was clear that several of the nation’s largest and most influential news outlets had spread an explosive but completely false news story to millions of people, while refusing to provide any explanation of how it happened.

The spectacle began Friday morning at 11 a.m. EST, when the Most Trusted Name in News™ spent 12 straight minutes on air flamboyantly hyping an exclusive bombshell report that seemed to prove that WikiLeaks, last September, had secretly offered the Trump campaign, even Donald Trump himself, special access to the Democratic National Committee emails before they were published on the internet. As CNN sees the world, this would prove collusion between the Trump family and WikiLeaks and, more importantly, between Trump and Russia, since the U.S. intelligence community regards WikiLeaks as an “arm of Russian intelligence,” and therefore, so does the U.S. media.

This entire revelation was based on an email that CNN strongly implied it had exclusively obtained and had in its possession. The email was sent by someone named “Michael J. Erickson” — someone nobody had heard of previously and whom CNN could not identify — to Donald Trump Jr., offering a decryption key and access to DNC emails that WikiLeaks had “uploaded.” The email was a smoking gun, in CNN’s extremely excited mind, because it was dated September 4 — 10 days before WikiLeaks began promoting access to those emails online — and thus proved that the Trump family was being offered special, unique access to the DNC archive: likely by WikiLeaks and the Kremlin.

There was just one small problem with this story: It was fundamentally false, in the most embarrassing way possible. Hours after CNN broadcast its story — and then hyped it over and over and over — the Washington Post reported that CNN got the key fact of the story wrong.

This fundamentally false story does not qualify as “fake news” for this guide. Surprised?

The criteria for “fake news” also exclude questioning statements from members of the intelligence community, a community that includes James Clapper, a self-confessed and known liar who continues to be the darling of mainstream media outlets.

Cozy relationships between news organizations and their reporters with government and intelligence sources are also not addressed as potential sources of “fake news.”

Limiting the scope of a “fake news” study in order to have a doable project is understandable. However, excluding factually false stories, use of known liars and corrupting relationships, all because they occur in mainstream media, looks like picking a target to tar with the label “fake news.”

The guides and techniques themselves may be quite useful, so long as you remember they were designed to show social media as the spreader of “fake news.”

One last thing: what the authors don't offer, and I haven't seen reported elsewhere, is any measure of the effectiveness of so-called “fake news” with voters. Take “Pope Francis Endorses Trump”: a lie, however widely that story spread, but did it have any impact on the 2016 election? Or did every reader do a double-take and move on? It's possible to answer that type of question, but it does require facts.

Getting Started with Python/CLTK for Historical Languages

Filed under: Classics,Language,Python — Patrick Durusau @ 2:03 pm

Getting Started with Python/CLTK for Historical Languages by Patrick J. Burns.

From the post:

This is a ongoing project to collect online resources for anybody looking to get started with working with Python for historical languages, esp. using the Classical Language Toolkit. If you have suggestions for this lists, email me at patrick[at]diyclassics[dot]org.

What classic or historical language resources would you recommend?

Complete Guide to Topic Modeling (Recommender System for Email Dumps?)

Filed under: Uncategorized — Patrick Durusau @ 1:42 pm

Complete Guide to Topic Modeling with scikit-learn and gensim by George-Bogdan Ivanov.

From the post:

Why is Topic Modeling useful?

There are several scenarios when topic modeling can prove useful. Here are some of them:

  • Text classification – Topic modeling can improve classification by grouping similar words together in topics rather than using each word as a feature
  • Recommender Systems – Using a similarity measure we can build recommender systems. If our system would recommend articles for readers, it will recommend articles with a topic structure similar to the articles the user has already read.
  • Uncovering Themes in Texts – Useful for detecting trends in online publications for example
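
As a taste of the scikit-learn side, a minimal LDA sketch (assuming the 20 newsgroups sample corpus rather than a corpus you actually care about; the guide also covers gensim):

    # Minimal LDA topic-modeling sketch with scikit-learn (illustration only).
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]

    vectorizer = CountVectorizer(max_df=0.95, min_df=2, stop_words="english")
    dtm = vectorizer.fit_transform(docs)  # document-term matrix

    lda = LatentDirichletAllocation(n_components=10, random_state=0)
    lda.fit(dtm)

    terms = vectorizer.get_feature_names_out()
    for idx, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-8:][::-1]]
        print("Topic %d: %s" % (idx, ", ".join(top)))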

Would a recommender system be useful for reading email dumps? 😉

Within or across candidates for Congress?

Secrets to Searching for Video Footage (AI Assistance In Your Future?)

Filed under: Artificial Intelligence,Deep Learning,Journalism,News,Reporting,Searching — Patrick Durusau @ 11:24 am

Secrets to Searching for Video Footage by Aric Toler.

From the post:

Much of Bellingcat’s work requires intense research into particular events, which includes finding every possible photograph, video and witness account that will help inform our analysis. Perhaps most notably, we exhaustively researched the events surrounding the shoot down of Malaysian Airlines Flight 17 (MH17) over eastern Ukraine.

The photographs and videos taken near the crash in eastern Ukraine were not particularly difficult to find, as they were widely publicized. However, locating over a dozen photographs and videos of the Russian convoy transporting the Buk anti-aircraft missile launcher that shot down MH17 three weeks before the tragedy was much harder, and required both intense investigation on social networks and some creative thinking.

Most of these videos were shared on Russian-language social networks and YouTube, and did not involve another type of video that is much more important today than it was in 2014 — live streaming. Bellingcat has also made an effort to compile all user-generated videos of the events in Charlottesville on August 12, 2017, providing a database of livestreamed videos on platforms like Periscope, Ustream and Facebook Live, along with footage uploaded after the protest onto platforms like Twitter and YouTube.

Verifying videos is important, as detailed in this Bellingcat guide, but first you have to find them. This guide will provide advice and some tips on how to gather as much video as possible on a particular event, whether it is videos from witnesses of a natural disaster or a terrorist attack. For most examples in this guide, we will assume that the event is a large protest or demonstration, but the same advice is applicable to other events.

I was amused by this description of Snapchat and Instagram:


Snapchat and Instagram are two very common sources for videos, but also two of the most difficult platforms to trawl for clips. Neither has an intuitive search interface that easily allows researchers to sort through and collect videos.

I’m certain that’s true but a trained AI could sort out videos obtained by overly broad requests. As I’m fond of pointing out, not 100% accuracy but you can’t get that with humans either.

Augment your searching with a tireless AI. For best results, add or consult a librarian as well.

PS: I have other concerns at the moment, but a subset of the Bellingcat Charlottesville database would make a nice training basis for an AI, which could then be loosed on Instagram and other sources to discover more videos. The usual stumbling block for AI projects is human-curated training material, which Bellingcat has already supplied.

Leaking Resources for Federal Employees with Ties to ‘Shithole’ Countries

Filed under: Journalism,Leaks,News,Reporting — Patrick Durusau @ 10:58 am

Trump derides protections for immigrants from ‘shithole’ countries by Josh Dawsey.

From the post:

President Trump grew frustrated with lawmakers Thursday in the Oval Office when they discussed protecting immigrants from Haiti, El Salvador and African countries as part of a bipartisan immigration deal, according to several people briefed on the meeting.

“Why are we having all these people from shithole countries come here?” Trump said, according to these people, referring to countries mentioned by the lawmakers.

The EEOC Annual report for 2014 reports out of 2.7 million women and men employed by the federal government:

…63.50% were White, 18.75% were Black or African American, 8.50% were Hispanic or Latino, 6.16% were Asian, 1.49% were American Indian or Alaska Native, 1.16% were persons of Two or More Races and 0.45% were Native Hawaiian or Other Pacific Islander…(emphasis added)

In other words, 27.25% of the 2.7 million people working for the federal government, or roughly 736,000 federal employees, have ties to ‘shithole’ countries.

President Trump’s rude remarks are an accurate reflection of current U.S. immigration policy:

The United States treats other countries as ‘shitholes,’ but it is considered impolite to mention that in public.

Federal employees with ties to ‘shithole’ countries are at least as loyal as, if not more loyal than, your average staffer.

That said, I’m disappointed that media outlets did not immediately call upon federal employees with ties to ‘shithole’ countries to start leaking documents/data.

Here are some places documents can be leaked to:

More generally, see Here’s how to share sensitive leaks with the press and their excellent listing of SecureDrop resources for anonymous submission of documents.

If you have heard of the Panama Papers or the Paradise Papers, then you are thinking about the International Consortium of Investigative Journalists. They do excellent work, but like the other journalists mentioned, are obsessed with being in control of the distribution of your leak.

Every outrage, whether a shooting, unjust imprisonment, racist remarks, religious bigotry, is an opportunity to incite leaking by members of a group.

Not calling for leaking speaks volumes about your commitment to the status quo and its current injustices.

January 11, 2018

The art of writing science

Filed under: Conferences,Science,Writing — Patrick Durusau @ 4:21 pm

The art of writing science by Kevin W. Plaxco

From the post:

The value of writing well should not be underestimated. Imagine, for example, that you hold in your hand two papers, both of which describe precisely the same set of experimental results. One is long, dense, and filled with jargon. The other is concise, engaging, and easy to follow. Which are you more likely to read, understand, and cite? The answer to this question hits directly at the value of good writing: writing well leverages your work. That is, while even the most skillful writing cannot turn bad science into good science, clear and compelling writing makes good science more impactful, and thus more valuable.

The goal of good writing is straightforward: to make your reader’s job as easy as possible. Realizing this goal, though, is not so simple. I, for one, was not a natural-born writer; as a graduate student, my writing was weak and rambling, taking forever to get to the point. But I had the good fortune to postdoc under an outstanding scientific communicator, who taught me the above-described lesson that writing well is worth the considerable effort it demands. Thus inspired, I set out to teach myself how to communicate more effectively, an effort that, some fifteen years later, I am still pursuing.

Along the way I have learned a thing or two that I believe make my papers easier to read, a few of which I am pleased to share with you here. Before I share my hard-won tips, though, I have an admission: there is no single, correct way to write. In fact, there are a myriad of solutions to the problem of writing well (see, e.g., Refs.1–4). The trick, then, is not to copy someone else’s voice, but rather to study what works—and what does not—in your own writing and that of others to formulate your own guide to effective communication. Thus, while I present here some of my most cherished writing conventions (i.e., the rules that I force on my own students), I do not mean to imply that they represent the only acceptable approach. Indeed, you (or your mentor) may disagree strongly with many of the suggestions I make below. This, though, is perfectly fine: my goal is not to convince you that I have found the one true way, but instead simply to get people thinking and talking about writing. I do so in the hope that this will inspire a few more young scientists to develop their own effective styles.

The best way to get the opportunity to do a great presentation for Balisage 2018 is to write a great paper for Balisage 2018. A great paper is step one towards being accepted and having a chance to bask in the admiration of other markup geeks.

OK, so it's not so much basking as trying to see by starlight on a cloudy night.

Still, a great paper will impress the reviewers and, if accepted, readers when it appears in this year's proceedings.

Strong suggestion: Try Plaxco's first-sentence-of-the-paragraph test on your paper (or any paper you are reviewing). If it fails, start over.

I volunteer to do peer review for Balisage so I’m anticipating some really well-written papers this year.

The David Attenborough Style of Scientific Presentation (Historic First for Balisage?)

Filed under: Communication,Conferences,Presentation — Patrick Durusau @ 4:17 pm

The David Attenborough Style of Scientific Presentation by Will Ratcliff.

From the post:

One of the biggest hurdles to giving a good talk is convincing people that it’s worth their mental energy to listen to you. This approach to speaking is designed to get that buy-in from the audience, without them even realizing they are doing so. The key to this is exploitation of a simple fact: people are curious creatures by nature and will pay attention to a cool story as long as that story remains absolutely clear.

In the D.A. style of speaking, you are the narrator of an interesting story. The goal is to have a visually streamlined talk where the audience is so engaged with your presentation that they forget you’re standing in front of them speaking. Instead, they’re listening to your narrative and seeing the visuals that accompany your story, at no point do they have to stop and try to make sense of what you just said.

A captivating two-page summary of the David Attenborough (DA) style for presentations. At first, since I don't travel any longer, I wasn't going to mention it.

On a second or third read, the blindingly obvious hit me:

Rules that work for live conference presentations also work for video podcasts, lectures, client presentations, or anywhere else you are seeking to communicate effectively with others. (I guess that rules out White House press briefings.)

Paper submission dates aren’t out yet for Balisage 2018 but your use of DA style for your presentation would be a historic first, so far as I know. 😉

No promises, but a video of the same presentation delivered in both “normal” style and DA style could be an interesting data point.

Introduction to reverse engineering and Assembly (Suicidal Bricking by Ubuntu Servers)

Filed under: Assembly,Cybersecurity,Reverse Engineering,Security — Patrick Durusau @ 4:05 pm

Introduction to reverse engineering and Assembly by Youness Alaoui.

From the post:

Recently, I’ve finished reverse engineering the Intel FSP-S “entry” code, that is from the entry point (FspSiliconInit) all the way to the end of the function and all the subfunctions that it calls. This is only some initial foray into reverse engineering the FSP as a whole, but reverse engineering is something that takes a lot of time and effort. Today’s blog post is here to illustrate that, and to lay the foundations for understanding what I’ve done with the FSP code (in a future blog post).

Over the years, many people asked me to teach them what I do, or to explain to them how to reverse engineer assembly code in general. Sometimes I hear the infamous “How hard can it be?” catchphrase. Last week someone I was discussing with thought that the assembly language is just like a regular programming language, but in binary form—it’s easy to make that mistake if you’ve never seen what assembly is or looks like. Historically, I’ve always said that reverse engineering and ASM is “too complicated to explain” or that “If you need help to get started, then you won’t be able to finish it on your own” and various other vague responses—I often wanted to explain to others why I said things like that but I never found a way to do it. You see, when something is complex, it’s easy to say that it’s complex, but it’s much harder to explain to people why it’s complex.

I was lucky to recently stumble onto a little function while reverse engineering the Intel FSP, a function that was both simple and complex, where figuring out what it does was an interesting challenge that I can easily walk you through. This function wasn’t a difficult thing to understand, and by far, it’s not one of the hard or complex things to reverse engineer, but this one is “small and complex enough” that it’s a perfect example to explain, without writing an entire book or getting into the more complex aspects of reverse engineering. So today’s post serves as a “primer” guide to reverse engineering for all of those interested in the subject. It is a required read in order to understand the next blog posts I would be writing about the Intel FSP. Ready? Strap on your geek helmet and let’s get started!
… (emphasis in original)

Intel? Intel? I heard something recently about Intel chips. You? 😉

No, this won’t help you specifically with Spectre and Meltdown, but it’s a step in the direction of building such skills.

The Project Zero team at Google did not begin life with the skills necessary to discover Spectre and Meltdown.

It took 20 years for those vulnerabilities to be discovered.

What vulnerabilities await discovery by you?

PS: Word on the street is that Ubuntu 16.04 servers are committing suicide rather than run more slowly with patches for Meltdown and Spectre (see Meltdown and Spectre Patches Bricking Ubuntu 16.04 Computers). The attribution of intention to Ubuntu servers may be a bit overdone but the bricking part is true.

W. E. B. Du Bois as Data Scientist

Filed under: Data Science,Social Sciences,Socioeconomic Data,Visualization — Patrick Durusau @ 3:51 pm

W. E. B. Du Bois’s Modernist Data Visualizations of Black Life by Allison Meier.

From the post:

For the 1900 Exposition Universelle in Paris, African American activist and sociologist W. E. B. Du Bois led the creation of over 60 charts, graphs, and maps that visualized data on the state of black life. The hand-drawn illustrations were part of an “Exhibit of American Negroes,” which Du Bois, in collaboration with Thomas J. Calloway and Booker T. Washington, organized to represent black contributions to the United States at the world’s fair.

This was less than half a century after the end of American slavery, and at a time when human zoos displaying people from colonized countries in replicas of their homes were still common at fairs (the ruins of one from the 1907 colonial exhibition in Paris remain in the Bois de Vincennes). Du Bois’s charts (recently shared by data artist Josh Begley on Twitter) focus on Georgia, tracing the routes of the slave trade to the Southern state, the value of black-owned property between 1875 and 1889, comparing occupations practiced by blacks and whites, and calculating the number of black students in different school courses (2 in business, 2,252 in industrial).

Ellen Terrell, a business reference specialist at the Library of Congress, wrote a blog post in which she cites a report by Calloway that laid out the 1900 exhibit’s goals:

It was decided in advance to try to show ten things concerning the negroes in America since their emancipation: (1) Something of the negro’s history; (2) education of the race; (3) effects of education upon illiteracy; (4) effects of education upon occupation; (5) effects of education upon property; (6) the negro’s mental development as shown by the books, high class pamphlets, newspapers, and other periodicals written or edited by members of the race; (7) his mechanical genius as shown by patents granted to American negroes; (8) business and industrial development in general; (9) what the negro is doing for himself though his own separate church organizations, particularly in the work of education; (10) a general sociological study of the racial conditions in the United States.

Georgia was selected to represent these 10 points because, according to Calloway, “it has the largest negro population and because it is a leader in Southern sentiment.” Rebecca Onion on Slate Vault notes that Du Bois created the charts in collaboration with his students at Atlanta University, examining everything from the value of household and kitchen furniture to the “rise of the negroes from slavery to freedom in one generation.”

The post is replete with images created by Du Bois for the exposition, of which this is an example:

As we all know, but rarely say in public, data science and data visualization aren't new disciplines.

The data science/visualization by Du Bois merits notice during Black History Month (February), but during the rest of the year as well. It's part of our legacy in data science and we should be proud of it.

The Watchdog Press As Lapdog Press

Filed under: Journalism,Law,News,Reporting — Patrick Durusau @ 3:42 pm

When Intelligence Agencies Make Backroom Deals With the Media, Democracy Loses by Bill Blunden.

From the post:

Steven Spielberg’s new movie The Post presents the story behind Katharine Graham’s decision to publish the Pentagon Papers in The Washington Post. As the closing credits roll, one is left with the impression of a publisher who adopts an adversarial stance towards powerful government officials. Despite the director’s $50 million budget (or, perhaps, because of it), there are crucial details that are swept under the rug — details that might lead viewers towards a more accurate understanding of the relationship between the mainstream corporate press and the government.

The public record offers some clarity. Three years after Graham decided to go public with the Pentagon Papers, Seymour Hersh revealed a Central Intelligence Agency (CIA) program called Operation CHAOS in The New York Times. Hersh cited inside sources who described “a massive, illegal domestic intelligence operation during the Nixon Administration against the antiwar movement and other dissident groups in the United States.” Hersh’s article on CIA domestic operations is pertinent because, along with earlier revelations by Christopher Pyle, it prompted the formation of the Church Commission.

The Church Commission was chartered to examine abuses by United States intelligence agencies. In 1976, the commission’s final report (page 455 of Book I, entitled “Foreign and Military Intelligence”) found that the CIA maintained “a network of several hundred foreign individuals around the world who provide intelligence for the CIA and at times attempt to influence opinion through the use of covert propaganda” and that “approximately 50 of the [Agency] assets are individual American journalists or employees of US media organizations.”

These initial findings were further corroborated by Carl Bernstein, who unearthed a web of “more than 400 American journalists who in the past twenty‑five years have secretly carried out assignments for the Central Intelligence Agency.” Note that Bernstein was one of the Washington Post journalists who helped to expose the Watergate scandal. He published his piece on the CIA and the media with Rolling Stone magazine in 1977.

Show of hands. How many of you think the CIA, which freely violates surveillance and other laws, has not continued to suborn journalists, up to and including now?

Despite a recent assurance from someone whose opinion I value, journalists operating on a shoe-string have no corner on the public interest. Nor is that a guarantee they don’t have their own agendas.

Money is just one source of corruption. Access to classified information, prestige in the profession, deciding who's newsworthy and who is not, power over other reporters, are all factors that don't operate in the public interest.

My presumption about undisclosed data in the possession of reporters accords with the State of Georgia, 24-4-22. Presumption from failure to produce evidence:

If a party has evidence in his power and within his reach by which he may repel a claim or charge against him but omits to produce it, or if he has more certain and satisfactory evidence in his power but relies on that which is of a weaker and inferior nature, a presumption arises that the charge or claim against him is well founded; but this presumption may be rebutted.

In short, evidence you don’t reveal is presumed to be against you.

That has worked for centuries in courts, why would I apply a different standard to reporters (or government officials)?

Fact Forward: Fact Free Assault on Online Misinformation

Filed under: Fake News,Journalism,News,Reporting — Patrick Durusau @ 3:00 pm

Fact Forward: If you had $50,000, how would you change fact-checking?

From the post:

The International Fact-Checking Network wants to support your next big idea.

We recognize the importance of making innovation a key part of fact-checking in the age of online misinformation and we are also aware that innovation requires investment. For those reasons, we are opening Fact Forward. A call for fact-checking organizations and/or teams of journalists, designers, developers or data scientists to submit projects that can represent a paradigmatic innovation for fact-checkers in any of these areas: 1) formats, 2) business models 3) technology-assisted fact-checking.

With Fact Forward, the IFCN will grant 50,000 USD to the winning project.

For this fund, an innovative project is defined as one that provides a distinct, novel user experience that seamlessly integrates content, design, and business strategy. The innovation should serve both the audience and the organization.

The vague definition of “innovative project” leaves the impression the judges have no expertise with software development. A quick check of the judges' credentials reveals that is indeed the case. Be forewarned, fluffy pro-fact-checking phrases are likely to outweigh any technical merit in your proposals.

If you doubt this is an ideological project, consider the implied premises of “…the age of online misinformation….” Conceding that online misinformation does exist, the implied premises include:

1. Online misinformation influences voters:

What evidence does exist is reported by Hunt Allcott and Matthew Gentzkow in Social Media and Fake News in the 2016 Election, whose abstract reads:

Following the 2016 U.S. presidential election, many have expressed concern about the effects of false stories (“fake news”), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: (i) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their “most important” source; (ii) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; (iii) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and (iv) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.

Or as summarized in Don’t blame the election on fake news. Blame it on the media by Duncan J. Watts and David M. Rothschild:


In addition, given what is known about the impact of online information on opinions, even the high-end estimates of fake news penetration would be unlikely to have had a meaningful impact on voter behavior. For example, a recent study by two economists, Hunt Allcott and Matthew Gentzkow, estimates that “the average US adult read and remembered on the order of one or perhaps several fake news articles during the election period, with higher exposure to pro-Trump articles than pro-Clinton articles.” In turn, they estimate that “if one fake news article were about as persuasive as one TV campaign ad, the fake news in our database would have changed vote shares by an amount on the order of hundredths of a percentage point.” As the authors acknowledge, fake news stories could have been more influential than this back-of-the-envelope calculation suggests for a number of reasons (e.g., they only considered a subset of all such stories; the fake stories may have been concentrated on specific segments of the population, who in turn could have had a disproportionate impact on the election outcome; fake news stories could have exerted more influence over readers’ opinions than campaign ads). Nevertheless, their influence would have had to be much larger—roughly 30 times as large—to account for Trump’s margin of victory in the key states on which the election outcome depended.

Just as one example, online advertising is routinely studied; see, e.g., Understanding Interactive Online Advertising: Congruence and Product Involvement in Highly and Lowly Arousing, Skippable Video Ads by Daniel Belanche, Carlos Flavián, and Alfredo Pérez-Rueda. But the IFCN offers no similar studies for what it construes as “…online misinformation….”

Without some evidence for and measurement of the impact of “…online misinformation…,” what are the criteria for success for your project?

2. Correcting online misinformation influences voters:

The second, even more problematic assumption in this project is that correcting online misinformation influences voters.

Facts, even “correct” facts, do a poor job of changing opinions. Even the lay literature is legion on this point: Facts Don't Change People's Minds. Here's What Does; Why Facts Don't Change Our Minds; The Backfire Effect: Why Facts Don't Win Arguments; In the battle to change people's minds, desires come before facts; The post-fact era.

Any studies to the contrary? Surely the IFCN has some evidence that correcting misinformation changes opinions or influences voter behavior?

(I reserve this space for any studies supplied by the IFCN or others to support that premise.)

I don’t disagree with fact checking per se. Readers should be able to rely upon representations of fact. But Glenn Greenwald’s The U.S. Media Suffered Its Most Humiliating Debacle in Ages and Now Refuses All Transparency Over What Happened makes it clear that misinformation isn’t limited to appearing online.

One practical suggestion: If $50,000 is enough for your participation in an ideological project, use sentiment analysis to identify pro-Trump materials. Anything “pro-Trump” is, for some funders, “misinformation.”
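
A minimal sketch of that suggestion, assuming (crudely) that the sentiment of sentences mentioning a candidate is a rough proxy for “pro” coverage, using NLTK's VADER analyzer:

    # Crude sentiment-as-proxy sketch (my illustration, not the IFCN's method).
    import nltk
    nltk.download("vader_lexicon", quiet=True)
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    sia = SentimentIntensityAnalyzer()

    def candidate_sentiment(sentences, name="Trump"):
        hits = [s for s in sentences if name.lower() in s.lower()]
        return [(s, sia.polarity_scores(s)["compound"]) for s in hits]

    sample = [
        "Trump delivered a triumphant, energizing speech.",
        "The senator discussed farm policy at length.",
        "Critics called the Trump proposal a disaster.",
    ]
    for sentence, score in candidate_sentiment(sample):
        print(round(score, 2), sentence)

Positive compound scores flag “pro” material, negative scores flag “anti,” and everything in between is the hard part.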

PS: I didn’t vote for Trump and loathe his administration. However, pursuing fantasies to explain his victory in 2016 won’t prevent a repeat of same in 2020. Whether he is defeated with misinformation or correct information makes no difference to me. His defeat is the only priority.

Practical projects with a defeat of Trump in 2020 goal are always of interest. Ping me.

January 10, 2018

Tails With Meltdown and Spectre Fixes w/ Caveats

Filed under: Cybersecurity,Security,Tails — Patrick Durusau @ 4:59 pm

Tails 3.4 is out

From the post:


In particular, Tails 3.4 fixes the widely reported Meltdown attack, and includes the partial mitigation for Spectre.

Timely security patches are always good news.

Three caveats:

1. Meltdown and Spectre patches originate in the same community that missed these vulnerabilities for twenty-odd years. How confident are you in these patches?

2. Meltdown and Spectre are more evidence for the existence of other fundamental design flaws than we have for life on other planets.

3. When did the NSA become aware of Meltdown and Spectre?

eXist-db – First Upgrade for 2018

Filed under: eXist,XML,XML Database,XQuery — Patrick Durusau @ 2:06 pm

I usually update from notices of a new version and so rarely visit the eXist-db homepage. My loss.

There’s a cool homepage image. With links to documentation, community, references, but not overwhelmingly so.

Kudos! Oh, the upgrade:

eXist-db v3.6.1 – January 03, 2018

From the release notes:

eXist-db v3.6.1 has just been released. This is a hotfix release, which contains bug fixes for several important issues discovered since eXist-db v3.6.0.

We recommend that all users of eXist 3.6.0 should upgrade to eXist 3.6.1.

Bug fixes

  • Fixed issue where the package manager wrote non-well-formed XML that caused problems during backup/restore. #1620
  • Fixed namespace prefix for attributes and namespace nodes.
  • Made sure the localName of a in memory element is correctly obtained under various namespace declaration conditions
  • Fix for NPE in org.exist.xquery.functions.fn.FunId #1642
  • Several atomic comparisons raise wrong error code #1638
  • General comparison to empty sequence sometimes raises an error #1639
  • Warn if no <target> is found in an EXPath packages’s repo.xml

Backwards Compatibility

  • eXist-db v3.6.1 is backwards binary-compatible as far as v3.0, but not with earlier versions. Users upgrading from previous versions should perform a full backup and restore to migrate their data.

Downloading This Version

eXist-db v3.6.1 is available for download from Bintray. Maven artifacts for eXist-db v3.6.1 are available from our mvn-repo. Mac users of the Homebrew package repository may acquire eXist 3.6.1 directly from there.

When 2018 congressional candidate (U.S.) inboxes start dropping, will eXist-db be your tool of choice?

Enjoy!

Women of Islamic Studies

Filed under: Islam — Patrick Durusau @ 1:53 pm

Women of Islamic Studies by Dr. Kristian Petersen.

From the webpage:

Women of Islamic Studies is a crowdsourced database of women scholars who work on Muslims and Islam. This ongoing project is in its beta version. Once sufficient data has been collected I will partner with a university for a more stable home.

Women of Islamic Studies is intended to contest the prevalence of all-male and male dominated academic domains, such as editorial boards, conference panels, publications, guest speakers, bibliographies, books reviews, etc. and provide resources to support the recognition, citation, and inclusion of women scholars in the field of Islamic Studies. Anyone who identifies as a woman, gender non-conforming, or non-binary is welcomed on the list. The scholars listed come from a wide variety of disciplines and perspectives. “Islamic Studies” is meant to be as inclusive as possible, meaning anyone whose expertise is related to the understanding of Muslims and the Islamic tradition, and intended to demarcate a disciplinary boundary. Please feel free to list any relevant scholars who work on Islam and Muslims in any capacity. The crowdsourced contents are made possible by many contributors. Please add to our list and help spread the word.

I have contacted my graduate school Arabic professor to ask if she wants to join this list.

Who are you going to ask to join? Failing that, spread the word!

Source Community Call | January 11, 2018 | Thursday @ 12pm ET – GMT 5pm – 9am PDT

Filed under: Journalism,News,Reporting — Patrick Durusau @ 12:56 pm

A resource sponsored by OpenNews, which self-describes as:

At OpenNews, we believe that a community of peers working, learning and solving problems together can create a stronger, more representative, and ascendant journalism. We organize events and community supports to strengthen and sustain this ecosystem.

  • In collaboration with writers and developers in newsrooms around the world, we publish Source, a community site focused on open technology projects and process in journalism. From features that explore the context behind the code to targeted job listings that help the community expand, Source presents the people, projects, and insights behind journalism code.

    We also hold biweekly Source community calls where newsroom data and apps teams can share their work, announce job openings, and find collaborators.

On the agenda for tomorrow:

  • Reporting on police shootings – Allison McCann
  • Accessibility on the web – Joanna Kao

Call Details for Jan. 11, 2018.

Archive of prior calls

Mark your calendars!: Every-other Thursday @ 12pm ET – GMT 5pm – 9am PDT

Email Spam from Congress

Filed under: Government,Journalism,News — Patrick Durusau @ 10:40 am

Receive an Email when a Member of Congress has a New Remark Printed in the Congressional Record by Robert Brammer.

From the post:

Congress.gov alerts are emails sent to you when a measure (bill or resolution), nomination, or member profile has been updated with new information. You can also receive an email after a Member has new remarks printed in the Congressional Record. Here are instructions on how to get an email after a Member has new remarks printed in the Congressional Record….

My blog title is unfair to Brammer, who isn’t responsible for the lack of meaningful content in Member remarks printed in the Congressional Record.

Local news outlets reprint such remarks, as does the national media, whether those remarks are grounded in any shared reality or not. Secondary-education classes on current events, reporting, and government, where such remarks are considered meaningful, are likely to find this useful.

Another use, assuming mining of prior remarks from the Congressional Record, would be in teaching NLP techniques. Highly unlikely you will discover anything new but it will be “new to you” and the result of your own efforts.

January 9, 2018

Top 5 Cloudera Engineering Blogs of 2017

Filed under: Cloudera,Impala,Kafka,Spark — Patrick Durusau @ 8:22 pm

Top 5 Cloudera Engineering Blogs of 2017

From the post:

1. Working with UDFs in Apache Spark

2. Offset Management For Apache Kafka With Apache Spark Streaming

3. Performance comparison of different file formats and storage engines in the Apache Hadoop ecosystem

4. Up and running with Apache Spark on Apache Kudu

5. Apache Impala Leads Traditional Analytic Database
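
To give item 1 some flavor, here's a minimal PySpark UDF sketch (my illustration; the Cloudera post goes into far more depth):

    # Minimal PySpark UDF sketch (illustration only).
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import IntegerType

    spark = SparkSession.builder.appName("udf-sketch").getOrCreate()
    df = spark.createDataFrame([("alpha",), ("bb",), ("c",)], ["word"])

    # Wrap an ordinary Python function as a column-level UDF.
    word_len = udf(lambda s: len(s) if s is not None else None, IntegerType())

    df.withColumn("length", word_len(df["word"])).show()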

Kudos to Cloudera for a useful list of “top” blog posts for 2017.

We might disagree on the top five but it’s a manageable number of posts and represents the quality of Cloudera postings all year long.

Enjoy!

Sessions for XML Prague 2018 – January 10th, Early Bird Deadline!

Filed under: Conferences,XML,XQuery,XSLT — Patrick Durusau @ 8:03 pm

List of sessions for XML Prague 2018

The range of great presentations is no surprise.

That early registration is still open, with this list of presentations, well, that is a surprise!

January 10, 2018 is the deadline for early birds!

From the post:

Unconference day

Schematron Users Meetup
XSL-FO, CSS and Paged Output – hosted by Antenna House
Introduction to CSS for Paged Media
XSpec Users Meetup
oXygen Users Meetup
Creating beautiful documents with the speedata Publisher
eXist-db Community Meetup
XML with Emacs workshop

Friday and Saturday sessions

Bert Willems: Assisted Structured Authoring using Conditional Random Fields
Christophe Marchand and Matthieu Ricaud-Dussarget: Using Maven with XML Projects
Elli Bleeker, Bram Buitendijk, Ronald Haentjens Dekker and Astrid Kulsdom: Including XML Markup in the Automated Collation of Literary Texts
Erik Siegel: Multi-layered content modelling to the rescue
Francis Cave: Does the world need more XML standards?
Gerrit Imsieke: tokenized-to-tree – An XProc/XSLT Library For Patching Back Tokenization/Analysis Results Into Marked-up Text
Hans-Juergen Rennau: Combining graph and tree: writing SHAX, obtaining SHACL, XSD and more
James Fuller: Diff with XQuery
Jean-François Larvoire: SML – A simpler and shorter representation of XML
Johannes Kolbe and Manuel Montero: XML periodic table, XML repository and XSLT checker
Michael Kay: XML Tree Models for Efficient Copy Operations
O’Neil Delpratt and Debbie Lockett: Implementing XForms using interactive XSLT 3.0
Pieter Masereeuw: Can we create a real world rich Internet application using Saxon-JS?
Radu Coravu: A short story about XML encoding and opening very large documents in an XML editing application
Steven Higgs: XML Success Story: Creating and Integrating Collaboration Solutions to Improve the Documentation Process
Steven Pemberton: Form, and Content
Tejas Barhate and Nigel Whitaker: Varieties of XML Merge: Concurrent versus Sequential
Tony Graham: Life, the Universe, and CSS Tests
Vasu Chakkera: Effective XSLT Documentation and its separation from XSLT code
Zachary Dean: xqerl: XQuery 3.1 Implementation in Erlang

I’m expecting lots of tweets and posts about these presentations!

January 8, 2018

Are LaTeX Users Script Kiddies?

Filed under: Cybersecurity,Security,TeX/LaTeX — Patrick Durusau @ 5:15 pm

NO! Despite most LaTeX users not writing their own LaTeX engines or many of the packages they use, they are not script kiddies.

LaTeX users are experts in mathematics, statistics and probability, physics, computer science, astronomy and astrophysics (François Brischoux and Pierre Legagneux 2009), as well as being skilled LaTeX authors.

There’s no shame in using LaTeX, despite not implementing a LaTeX engine. LaTeX makes high quality typesetting available to hundreds of thousands of users around the globe.

Contrast that view of LaTeX with the view of making cyber vulnerabilities more widely available, a practice dismissed as empowering “script kiddies.”

Every cyber vulnerability is a step towards transparency. Governments and corporations fear cyber vulnerabilities because their use may uncover evidence of crimes and favoritism.

Fearing public exposure, it’s no surprise that governments prohibit the use of cyber vulnerabilities. Governments that also finance and support rape, torture, murder, etc., in pursuit of national policy.

The question for you is:

Do you want to assist such governments and corporations to continue hiding their secrets?

Your answer to that question should determine your position on the discovery, use and spread of cyber vulnerabilities.

16K+ Hidden Web Services (CSV file)

Filed under: Dark Web — Patrick Durusau @ 5:00 pm

I subscribe to the Justin at Hunchly Dark Web report. The current issue (daily) and archive are on Dropbox.

The daily issues are archived in .xlsx format. (Bleech!)

Yesterday I grabbed the archive, converted the files to CSV format, catted them together, cleaned up the extra headers, and ended up with a file of 16,814 links: HiddenServices-2017-07-13-2018-01-05.zip.

A number of uses come to mind: a seed list for a search engine, browsing by title, sub-setting for more specialized dark web lists, testing presence/absence of sites on sub-lists, etc.
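
If you'd rather script that conversion than do it by hand, a minimal pandas sketch (the directory and output file names below are hypothetical, and reading .xlsx needs the openpyxl package):

    # Hedged sketch of the .xlsx -> single CSV conversion described above.
    from pathlib import Path
    import pandas as pd

    frames = [pd.read_excel(p) for p in sorted(Path("hunchly-daily").glob("*.xlsx"))]
    combined = pd.concat(frames, ignore_index=True)  # "cat" the daily files together
    combined = combined.drop_duplicates()            # drop repeated rows/headers
    combined.to_csv("HiddenServices-combined.csv", index=False)
    print(len(combined), "rows written")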

I’m not affliated with Hunch.ly but you should give their Inspector Hunchly a look. From the webpage:

Inspector Hunchly toils in the background of your web browser to track, analyze and store web pages while you perform online investigations.

Forgets nothing, keeps everything.
… (emphasis in original)

When using Inspector Hunchly, be mindful that: Anything you record, can and will be discovered.

PS: The archive I downloaded, separate files for every day, 272.3 MB. My one file, 363.8 KB. Value added?

forall x : …Introduction to Formal Logic (Smearing “true” across formal validity and factual truth)

Filed under: Logic,Ontology — Patrick Durusau @ 4:47 pm

forall x: Calgary Remix, An Introduction to Formal Logic, by P. D. Magnus and Tim Button, with additions by J. Robert Loftis, remixed and revised by Aaron Thomas-Bolduc and Richard Zach.

From the introduction:

As the title indicates, this is a textbook on formal logic. Formal logic concerns the study of a certain kind of language which, like any language, can serve to express states of affairs. It is a formal language, i.e., its expressions (such as sentences) are defined formally. This makes it a very useful language for being very precise about the states of affairs its sentences describe. In particular, in formal logic it is impossible to be ambiguous. The study of these languages centres on the relationship of entailment between sentences, i.e., which sentences follow from which other sentences. Entailment is central because by understanding it better we can tell when some states of affairs must obtain provided some other states of affairs obtain. But entailment is not the only important notion. We will also consider the relationship of being consistent, i.e., of not being mutually contradictory. These notions can be defined semantically, using precise definitions of entailment based on interpretations of the language—or proof-theoretically, using formal systems of deduction.

Formal logic is of course a central sub-discipline of philosophy, where the logical relationship of assumptions to conclusions reached from them is important. Philosophers investigate the consequences of definitions and assumptions and evaluate these definitions and assumptions on the basis of their consequences. It is also important in mathematics and computer science. In mathematics, formal languages are used to describe not “everyday” states of affairs, but mathematical states of affairs. Mathematicians are also interested in the consequences of definitions and assumptions, and for them it is equally important to establish these consequences (which they call “theorems”) using completely precise and rigorous methods. Formal logic provides such methods. In computer science, formal logic is applied to describe the state and behaviours of computational systems, e.g., circuits, programs, databases, etc. Methods of formal logic can likewise be used to establish consequences of such descriptions, such as whether a circuit is error-free, whether a program does what it’s intended to do, whether a database is consistent or if something is true of the data in it….

Unfortunately, formal logic uses “true” for a conclusion that merely follows validly from a set of premises.

That smearing of “true” across formal validity and factual truth enables ontologists to make implicit claims about factual truth, ever ready to retreat into “…all I meant was formal validity.”
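To make the gap concrete (my own illustration, not one from the textbook): the argument below is formally valid, yet its conclusion is factually false because the first premise is false.

  % Valid by universal instantiation and modus ponens, yet unsound:
  %   P1: All birds can fly.   (factually false)
  %   P2: Penguins are birds.
  %   C:  Penguins can fly.
  \[
    \frac{\forall x\,(\mathit{Bird}(x) \rightarrow \mathit{Flies}(x)) \qquad \mathit{Bird}(\mathit{penguin})}
         {\mathit{Flies}(\mathit{penguin})}
  \]

Validity guarantees only that the conclusion holds if the premises do; it says nothing about whether the premises, and hence the conclusion, hold in the world.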

Premises, within and without ontologies, are known carriers of discrimination and prejudice. Don’t be distracted by “formal validity” arguments. Keep a laser focus on claimed premises.

Bait Avoidance, Congress, Kaspersky Lab

Filed under: Cybersecurity,Government,Politics,Security — Patrick Durusau @ 2:56 pm

Should you use that USB key you found? by Jeffrey Esposito.

Here is a scenario for you: You are walking around, catching Pokémon, getting fresh air, people-watching, taking Fido out to do his business, when something catches your eye. It’s a USB stick, and it’s just sitting there in the middle of the sidewalk.

Jackpot! Christmas morning! (A very small) lottery win! So, now the question is, what is on the device? Spring Break photos? Evil plans to rule the world? Some college kid’s homework? You can’t know unless…

Esposito details an experiment in which USB keys left around the University of Illinois campus resulted in 48% of them being plugged into computers.

Reports like this from Kaspersky Lab, given the interest in Kaspersky by Congress, could lead to what the pest control industry calls “bait avoidance.”

Imagine members of Congress or their staffs not stuffing random USB keys into their computers. This warning from Kaspersky could poison the well for everyone.

For what it’s worth, salting the halls and offices of Congress with new-release music and movies on USB keys may help develop and maintain insecure USB practices. Countering bait avoidance is everyone’s responsibility.

January 6, 2018

21 Recipes for Mining Twitter Data with rtweet

Filed under: R,Social Media,Tweets,Twitter — Patrick Durusau @ 5:26 pm

21 Recipes for Mining Twitter Data with rtweet by Bob Rudis.

From the preface:

I’m using this as way to familiarize myself with bookdown so I don’t make as many mistakes with my web scraping field guide book.

It’s based on Matthew R. Russell’s book. That book is out of distribution and much of the content is in Matthew’s “Mining the Social Web” book. There will be many similarities between his “21 Recipes” book and this book on purpose. I am not claiming originality in this work, just making an R-centric version of the cookbook.

As he states in his tome, “this intentionally terse recipe collection provides you with 21 easily adaptable Twitter mining recipes”.

Rudis has posted about this editing project at: A bookdown “Hello World” : Twenty-one (minus two) Recipes for Mining Twitter with rtweet, which you should consult if you want to contribute to this project.

Working through 21 Recipes for Mining Twitter Data with rtweet will give you experience proofing a text, and if you type in the examples (no cut-n-paste), you’ll develop rtweet muscle memory.

Enjoy!

January 5, 2018

…Anyone With Less Technical Knowledge…

Filed under: Cybersecurity,Security — Patrick Durusau @ 5:17 pm

The headline came from Critical “Same Origin Policy” Bypass Flaw Found in Samsung Android Browser by Mohit Kumar, the last paragraph which reads:


Since the Metasploit exploit code for the SOP bypass vulnerability in the Samsung Internet Browser is now publicly available, anyone with less technical knowledge can use and exploit the flaw on a large number of Samsung devices, most of which are still using the old Android Stock browser.
… (emphasis added)

Kumar tosses off the … anyone with less technical knowledge … line like that’s a bad thing.

I wonder if Kumar can:

  1. Design and create a CPU chip?
  2. Design and create a memory chip?
  3. Design and create from scratch a digital computer?
  4. Design and implement an operating system?
  5. Design and create a programming language?
  6. Design and create a compiler for creation of binaries?
  7. Design and create the application he now uses for editing?

I’m guessing that Kumar strikes out on one or more of those questions, making him one of those anyone with less technical knowledge types.

I don’t doubt Kumar has a wide range of deep technical skills, but lacking some particular technical skill doesn’t diminish your value as a person or even as a technical geek.

Moreover, security failures should be made as easy to use as possible.

No corporation or government is going to voluntarily engage in behavior-changing transparency. The NSA was outed for illegal surveillance, Congress then passed a law making that surveillance retroactively legal, and when that authorization expired, the NSA continued its originally illegal surveillance.

Every security vulnerability is one potential step towards behavior-changing transparency. People with “…less technical knowledge…” aren’t going to find those, but with assistance they can make the best use of the ones that are found.

Security researchers should take pride in their work. But there’s no reflected glory in dissing people who are good at other things.

Transparency, behavior-changing transparency, will only result from the discovery and widespread use of security flaws. (Voluntary transparency being a contradiction in terms.)

January 4, 2018

Helping Google Achieve Transparency – Wage Discrimination

Filed under: sexism,Transparency — Patrick Durusau @ 8:36 pm

Google faces new discrimination charge: paying female teachers less than men by Sam Levin.

From the post:

Google, which has been accused of systematically underpaying female engineers and other workers, is now facing allegations that it discriminated against women who taught employees’ children at the company’s childcare center.

A former employee, Heidi Lamar, is alleging in a complaint that female teachers were paid lower salaries than men with fewer qualifications doing the same job.

Lamar, who worked at Google for four years before quitting in 2017, alleged that the technology company employed roughly 147 women and three men as pre-school teachers, but that two of those men were granted higher starting salaries than nearly all of the women.

Google did not respond to the Guardian’s request for data on its hiring practices of teachers.

As Levin reports, Google is beside itself with denials and other fact-free claims for which it offers no data.

If there were no wage discrimination, Google could release all of its payroll and related data and silence its critics at once.

Google has chosen to not silence its critics with facts known only to Google.

Google needs help seeing the value of transparency to answer charges of wage discrimination.

Will you be the one that helps Google realize the value of transparency?
