Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

April 30, 2018

Examining POTUS Executive Orders [Tweets < Executive Orders < CERN Data]

Filed under: Government Data,R,Text Mining,Texts — Patrick Durusau @ 8:12 pm

Examining POTUS Executive Orders by Bob Rudis.

From the post:

This week’s edition of Data is Plural had two really fun data sets. One is serious fun (the first comprehensive data set on U.S. evictions), and the other I knew about but had forgotten: the Federal Register Executive Order (EO) data set(s).

The EO data is also comprehensive as the summary JSON (or CSV) files have links to more metadata and even more links to the full-text in various formats.

What follows is a quick post to help bootstrap folks who may want to do some tidy text mining on this data. We’ll look at EOs-per-year (per-POTUS) and also take a look at the “top 5 ‘first words’” in the titles of the EOs (also by POTUS).

My estimate of the importance of executive orders by American Presidents, “Tweets < Executive Orders < CERN Data,” is only an approximation.

Rudis leaves you plenty of room to experiment with R and processing the text of executive orders.
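
If R isn’t handy, the “top 5 ‘first words’” idea can be sketched in a few lines of Python. The records below are toy stand-ins for the Federal Register metadata; only the president and title fields are assumed, not the real file layout:

```python
from collections import Counter, defaultdict

# Toy records standing in for the Federal Register EO metadata.
eos = [
    ("obama", "Establishing a White House Council"),
    ("obama", "Blocking Property of Certain Persons"),
    ("obama", "Amending Executive Order 13111"),
    ("trump", "Blocking Property of Persons Involved"),
    ("trump", "Establishing Enhanced Vetting"),
    ("trump", "Blocking the Property of Additional Persons"),
]

def top_first_words(records, n=5):
    """Count the first word of each EO title, per president."""
    counts = defaultdict(Counter)
    for president, title in records:
        first = title.split()[0].lower()
        counts[president][first] += 1
    return {p: c.most_common(n) for p, c in counts.items()}

print(top_first_words(eos))
```

Swap the toy list for rows read from the summary CSV and the same counting logic carries over.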

Enjoy!

TrackML Particle Tracking Challenge [Non-Twitter Big Data]

Filed under: CERN,Physics — Patrick Durusau @ 7:40 pm

TrackML Particle Tracking Challenge

Cutting to the chase:

… can machine learning assist high energy physics in discovering and characterizing new particles?

Details follow:

To explore what our universe is made of, scientists at CERN are colliding protons, essentially recreating mini big bangs, and meticulously observing these collisions with intricate silicon detectors.

While orchestrating the collisions and observations is already a massive scientific accomplishment, analyzing the enormous amounts of data produced from the experiments is becoming an overwhelming challenge.

Event rates have already reached hundreds of millions of collisions per second, meaning physicists must sift through tens of petabytes of data per year. And, as the resolution of detectors improves, ever better software is needed for real-time pre-processing and filtering of the most promising events, producing even more data.

To help address this problem, a team of Machine Learning experts and physics scientists working at CERN (the world’s largest high energy physics laboratory) has partnered with Kaggle and prestigious sponsors to answer the question: can machine learning assist high energy physics in discovering and characterizing new particles?

Specifically, in this competition, you’re challenged to build an algorithm that quickly reconstructs particle tracks from 3D points left in the silicon detectors. This challenge consists of two phases:

  • The Accuracy phase will run on Kaggle from May to July 2018. Here we’ll be focusing on the highest score, irrespective of the evaluation time. This phase is an official IEEE WCCI competition (Rio de Janeiro, Jul 2018).
  • The Throughput phase will run on Codalab from July to October 2018. Participants will submit their software which is evaluated by the platform. Incentive is on the throughput (or speed) of the evaluation while reaching a good score. This phase is an official NIPS competition (Montreal, Dec 2018).

All the necessary information for the Accuracy phase is available on the Kaggle site; the overall TrackML challenge has its own web site.
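
To make the task concrete: real solutions use far more sophisticated pattern recognition, but as a toy illustration, here is a Python sketch that groups 3D detector hits into crude track candidates by azimuthal angle. All data below is synthetic:

```python
import math
from collections import defaultdict

def phi_bins(hits, n_bins=32):
    """Group detector hits (x, y, z) into crude track candidates by
    azimuthal angle -- a toy stand-in for real track reconstruction."""
    tracks = defaultdict(list)
    for x, y, z in hits:
        phi = math.atan2(y, x)  # angle in the transverse plane
        b = int((phi + math.pi) / (2 * math.pi) * n_bins) % n_bins
        tracks[b].append((x, y, z))
    return tracks

# Two straight-line "tracks" from the origin at different angles,
# sampled at three radii each.
hits = [(r * math.cos(a), r * math.sin(a), 0.1 * r)
        for a in (0.3, 2.0) for r in (1.0, 2.0, 3.0)]
grouped = phi_bins(hits)
print({b: len(pts) for b, pts in grouped.items()})
```

Real particles curve in the detector’s magnetic field, so competitive entries fit helices rather than bin by angle, but the grouping-of-hits framing is the same.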

I know you breathed a sigh of relief upon reading, [Non-Twitter Big Data].

There’s nothing wrong with using Twitter to practice big data techniques, but at the end of the day, at best some advertiser can micro-tweak an advertisement for a loser (pronounced “user”). There’s no real bang from that “achievement.”

Unlike tweaking ad targeting, a viable solution to this challenge may make a fundamental difference in high energy physics.

Would you rather be known as an ad tweaker or for advancing ML in high energy physics?

Your call.

TeX Live 2018

Filed under: TeX/LaTeX — Patrick Durusau @ 6:44 pm

TeX Live 2018

TeX Live 2018 is available for installation over the Internet or as TeX Live on DVD! (Other methods are possible, see the webpage.)

I did some simple searches but no promising hits for a LaTeX query language.

Suggestions/pointers?

Not a concrete project but curious about identifying common practices across a corpus of LaTeX files.
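
Short of a true LaTeX query language, regular expressions get you surprisingly far. A minimal Python sketch for tallying \usepackage practices across a corpus; the inline strings below stand in for real .tex files you would glob and read:

```python
import re
from collections import Counter

# Hypothetical corpus: in practice, glob("**/*.tex") and read each file.
corpus = [
    r"\documentclass{article}\usepackage{graphicx}\usepackage[utf8]{inputenc}",
    r"\documentclass{book}\usepackage{graphicx}\usepackage{hyperref}",
]

PKG = re.compile(r"\\usepackage(?:\[[^\]]*\])?\{([^}]+)\}")

def package_counts(files):
    """Tally \\usepackage declarations across a corpus of .tex sources."""
    counts = Counter()
    for text in files:
        for group in PKG.findall(text):
            # \usepackage{a,b} loads several packages at once
            counts.update(p.strip() for p in group.split(","))
    return counts

print(package_counts(corpus).most_common(3))
```

The same pattern extends to \newcommand definitions, document classes, or environment usage, which is where the “common practices” question gets interesting.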

LaTeX Coffee Stains

Filed under: Humor,TeX/LaTeX — Patrick Durusau @ 4:40 pm

LaTeX Coffee Stains by Hanno Rein.

Even if you have the latest TeX Live release installed, you may overlook a package or be missing something essential like LaTeX Coffee Stains.

From the webpage:

This package provides an essential feature to LaTeX that has been missing for too long. It adds a coffee stain to your documents. A lot of time can be saved by printing stains directly on the page rather than adding them manually.

Give your documents that “hard use” look. Impress managers.

To make the pages “sticky,” you have to supply your own donuts.

April 29, 2018

The Feminist Data Set Project

Filed under: Data Science,Feminism — Patrick Durusau @ 7:15 pm

This Designer Is Fighting Back Against Bad Data–With Feminism by Katharine Schwab.

From the post:


“Intersectionality,” declares one in all caps. “Men Explain Things to Me– Solnit,” another one reads, referencing a 2008 essay by the writer Rebecca Solnit. “Is there a feminist programming language?” asks another. “Buffy 4eva,” reads an orange Post-it Note, next to a blue note that proclaims, “Transwomen are women.”

These are all ideas for the themes and pieces of content that will inform the “Feminist Data Set”: a project to collect data about intersectional feminism in a feminist way. Most data is scraped from existing networks and websites or collected by surveilling people as they move through digital and physical space–as such, it reflects the biases these existing systems have. The Feminist Data Set, on the other hand, aspires to a more equitable goal: collaborative, ethical data collection.

Step one? Sinders asks everyone in the room to spend five minutes brainstorming ideologies (like femininity, virtue, and implicit bias) and specific pieces of content (like old maid, cyberfeminism, and Mary Shelley) for the data set on sticky notes. Then, the entire group organizes them into categories, from high-level ideological frameworks down to individual pieces of content. The exercise is a chance for a particular artistic community to have a say over what feminist data is, while participating in an open-source project that they’ll one day be able to use for their own purposes. Right now, the data set includes a gender-neutral dictionary, essays by Donna Haraway, and journalist Claire Evans’s new book, Broad Band, a female-centric history of computing.

If you know the work of Caroline Sinders, @carolinesinders, you are already following her. If you don’t, get to Twitter and follow her!

There are any number of aspects of Sinders’ work that are important but the “Feminist Data Set” foregrounds one that is often overlooked.

As you start to speak, even in the mere shifting of your weight to enter a conversation, you are making decisions that will shape the data set that results from a group discussion.

No ill will or evil intent on your part, or anyone else’s, but the context that shapes our contributions, the other voices, prior suggestions, all shape the resulting view of “data.” Moreover, that shaping is unavoidable.

I see Sinders as pulling to the foreground what is often taken as “that’s the way it is.” No indeed, data is never the way it is. Data and data sets are the product of social agreements between people, people no more or less skilled than you.

This looks like deeply promising work and I look forward to hearing more about its progress.

Processing “Non-Hot Mike” Data (Audio Processing for Data Scientists)

Filed under: Ethics,Politics,Privacy,Speech Recognition — Patrick Durusau @ 6:32 pm

A “hot mike” is one that is transmitting your comments, whether you know the mike is activated or not.

For example, a “hot mike” in 2017 caught this jewel:

Israeli Prime Minister Benjamin Netanyahu called the European Union “crazy” at a private meeting with the leaders of four Central European countries, unaware that a microphone was transmitting his comments to reporters outside.

“The EU is the only association of countries in the world that conditions the relations with Israel, that produces technology in every area, on political conditions. The only ones! Nobody does it. It’s crazy. It’s actually crazy. There is no logic here,” Netanyahu said Wednesday in widely reported remarks.

Netanyahu was meeting with the leaders of Hungary, Slovakia, Czech Republic and Poland, known as the Visegrad Group.

The microphone was switched off after about 15 minutes, according to reports.

A common aspect of “hot mike” comments is the speaker knew the microphone was present, but assumed it was turned off. In “hot mike” cases, the speaker is known and the relevance of their comments usually obvious.

But what about “non-hot mike” comments? That is, comments made by a speaker with no sign of a microphone?

Say a casual conversation in a restaurant, at a party, in a taxi, at home or work, or anywhere in between?

Laws governing the interception of conversations are vast and complex, so before processing any conversation data you suspect has been intercepted, seek legal counsel. This post assumes you have been properly cautioned and have chosen to proceed with processing conversation data.

Royal Jain, in Intro to audio processing world for a Data scientist, begins a series of posts to help bridge the gap between NLP and speech/audio processing. Jain writes:

Coming from an NLP background, I had difficulties in understanding the concepts of speech/audio processing even though a lot of the underlying science and concepts were the same. This blog series is an attempt to make the transition easier for people having similar difficulties. The first part of this series describes the feature space which is used by most machine learning/deep learning models.

Looking forward to more posts in this series!
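
For the impatient: the feature space Jain describes starts with framing a signal and taking per-frame magnitude spectra. A dependency-free Python sketch, using a naive DFT on a toy signal (real pipelines use FFTs, windowing, and mel filter banks):

```python
import math

def frames(signal, size=64, hop=32):
    """Split a signal into overlapping frames, the first step of most
    audio feature pipelines."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def magnitude_spectrum(frame):
    """Naive DFT magnitude of one frame -- O(n^2), fine for a sketch."""
    n = len(frame)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append(math.hypot(re, im))
    return spec

# A 440 Hz tone sampled at 8 kHz: energy concentrates in one bin.
sr = 8000.0
signal = [math.sin(2 * math.pi * 440 * t / sr) for t in range(256)]
spectrogram = [magnitude_spectrum(f) for f in frames(signal)]
peak_bin = max(range(len(spectrogram[0])), key=lambda k: spectrogram[0][k])
print(peak_bin)  # bin k corresponds to roughly k * sr / frame_size Hz
```

The grid of per-frame spectra is the spectrogram, which is the “feature space” most speech models consume, usually after a mel-scale warping.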

Data science ethics advocates will quickly point out that privacy concerns surround the interception of private conversations.

They’re right!

But when the privacy in question belongs to those who plan, fund and execute regime-change wars, killing hundreds of thousands and making refugees out of millions more, generally increasing human misery on a global scale, I have an answer to the ethics question. My question is one of risk assessment.

You?

April 28, 2018

Getting Core Dumps on Linux – Writing Practice

Filed under: Linux OS,Programming — Patrick Durusau @ 4:26 pm

Julia Evans is an amazing programmer! Most people avoid core dumps; here Julia sets out to collect one, for use in debugging a segfault.

How to get a core dump for a segfault on Linux

From the post:

This week at work I spent all week trying to debug a segfault. I’d never done this before, and some of the basic things involved (get a core dump! find the line number that segfaulted!) took me a long time to figure out. So here’s a blog post explaining how to do those things!

At the end of this blog post, you should know how to go from “oh no my program is segfaulting and I have no idea what is happening” to “well I know what its stack / line number was when it segfaulted, at least!”
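
As a quick cheat sheet drawn from the post, the basic moves on Linux look like this (the program and core file names are placeholders; see Julia’s post for the full walkthrough):

```shell
# Allow core files to be written (the default size limit is often 0)
ulimit -c unlimited

# See where the kernel writes core dumps; a leading | means they are
# piped to a handler such as apport or systemd-coredump, not a file
cat /proc/sys/kernel/core_pattern

# Optionally write cores to the current directory as core.<pid> (needs root)
# echo 'core.%p' | sudo tee /proc/sys/kernel/core_pattern

# After the crash, open the dump with the binary that produced it
# gdb ./my_program core.12345
# (gdb) bt    # backtrace: the stack and line number at the segfault
```
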

You will learn a lot about core dumps and segfaults from this post but Julia’s post is also a great writing example.

Do you write up problems you have solved at work? Can your co-workers follow them and arrive at the same result (replication)? Every story you write and check with co-workers is a step towards improving your writing skills.

What did you learn this past week?

Mazes For Summer Vacation Trips!

Filed under: Humor,R — Patrick Durusau @ 3:52 pm

On any long vacation road trip, “spot the fascist,” a variation on spotting state license plates, keyed to bumper stickers, can get old. Not to mention it can become repetitious in some states and locales. Very repetitious.

If you convinced your significant other to permit a laptop on the trip, combine some R, maze generation and create entertainment for your fellow travelers!

Before leaving on your trip, check out the mazealls package.

To get you started, there are code illustrations for the following mazes: parallelogram, triangle, hexagon, dodecagon, trapezoid, rhombic dissections, Koch snowflake, Sierpinski triangle, Hexaflake, a dumb looking tree, hex spiral, a rectangle spiral, a double rectangle spiral, and a boustrophedon. The entries for “a dumb looking tree” and following illustrate the use of the pre-defined mazes as primitives.

From the webpage:

Generate mazes recursively via Turtle graphics.

Adjust the complexity and backgrounds of your mazes to be age-appropriate.
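
mazealls is R and builds mazes from turtle-graphics primitives; just to show the recursive idea in general, here is a minimal Python sketch of a recursive-backtracker maze (not the mazealls algorithm, just a common relative):

```python
import random

def generate_maze(w, h, seed=0):
    """Recursive-backtracker maze: carve passages between grid cells.
    Returns the set of opened walls as frozensets of neighboring cells."""
    random.seed(seed)
    visited = {(0, 0)}
    open_walls = set()
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < w and 0 <= y + dy < h
                     and (x + dx, y + dy) not in visited]
        if neighbors:
            nxt = random.choice(neighbors)   # carve into a random neighbor
            open_walls.add(frozenset({(x, y), nxt}))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                      # dead end: backtrack
    return open_walls

maze = generate_maze(8, 8)
# A perfect maze on a w*h grid opens exactly w*h - 1 walls (a spanning tree)
print(len(maze))
```

Because the carved passages form a spanning tree, every maze has exactly one solution path, which is what makes it a fair puzzle for the back seat.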

Useful even if you are not traveling. Say sitting through congressional hearings (present day) where members of Congress ask about AOL CDs. Working a maze under those circumstances may prevent you from getting dumber. No guarantees.

Two examples out of more than fifteen:

Parallelogram:

Composite of maze primitives:

Set an appropriate print scale to have any chance to solve these mazes!

Enjoy!

April 26, 2018

Largest Star Map Ever (1.7 billion stars)

Filed under: Astroinformatics,BigData — Patrick Durusau @ 4:16 pm

Incredible New View of the Milky Way Is the Largest Star Map Ever

From the webpage:

The European Space Agency’s Gaia spacecraft team has dropped its long-awaited trove of data about 1.7 billion stars. You can see a new visualization of all those stars in the Milky Way and nearby galaxies above, but you really need to zoom in to appreciate just how much stuff there is in the map. Yes, the specks are stars.

In addition to the 1.7 billion stars, this second release of data from Gaia contains the motions and color of 1.3 billion stars relative to the Sun, as well as how the stars relate to things in the distant background based on the Earth’s position. It also features radial velocities, amount of dust, and surface temperatures of lots of stars, and a catalogue of over 14,000 Solar System objects, including asteroids. There is a shitload of data in this release.

I won’t torment the images in the original post for display here.

More links:

Gaia Data Release 2: A Guide for Scientists

From the webpage:

“Gaia Data Release 2: A guide for scientists” consists of a series of videos from scientists to scientists. They are meant to give some overview of Gaia Data Release 2 and give help or guidance on the usage of the data. Also given are warnings and explanations on the limitations of the data. These videos are based on Skype interviews with the scientists.

Gaia Archive (here be big data)

Gaia is an ambitious mission to chart a three-dimensional map of our Galaxy, the Milky Way, in the process revealing the composition, formation and evolution of the Galaxy. Gaia will provide unprecedented positional and radial velocity measurements with the accuracies needed to produce a stereoscopic and kinematic census of about one billion stars in our Galaxy and throughout the Local Group. This amounts to about 1 per cent of the Galactic stellar population.

Astronomy, data science, computers, sounds like a natural for introducing all three to students!

Ethics and Law in AI ML

Filed under: Artificial Intelligence,Ethics,Machine Learning — Patrick Durusau @ 3:50 pm

Ethics and Law in AI ML

Data Science Renee, author of Women in Data Science (Twitter list with over 1400 members), has created a Twitter list focused on ethics and law in AI/ML.

When discussions of ethics for data scientists come up, remember that many players (corporations, governments, military organizations, spy agencies) abide by no code of ethics or laws. Adjust your ethics expectations accordingly.

April 25, 2018

Breaking Non-News: Twitter Has Echo Chambers (Co-Occupancy of Echo Chambers)

Filed under: Politics,Social Media,Social Networks — Patrick Durusau @ 9:55 am

Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship by Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, Michael Mathioudakis.

Abstract:

Echo chambers, i.e., situations where one is exposed only to opinions that agree with their own, are an increasing concern for the political discourse in many democratic countries. This paper studies the phenomenon of political echo chambers on social media. We identify the two components in the phenomenon: the opinion that is shared (‘echo’), and the place that allows its exposure (‘chamber’ — the social network), and examine closely how these two components interact. We define a production and consumption measure for social-media users, which captures the political leaning of the content shared and received by them. By comparing the two, we find that Twitter users are, to a large degree, exposed to political opinions that agree with their own. We also find that users who try to bridge the echo chambers, by sharing content with diverse leaning, have to pay a ‘price of bipartisanship’ in terms of their network centrality and content appreciation. In addition, we study the role of ‘gatekeepers’, users who consume content with diverse leaning but produce partisan content (with a single-sided leaning), in the formation of echo chambers. Finally, we apply these findings to the task of predicting partisans and gatekeepers from social and content features. While partisan users turn out to be relatively easy to identify, gatekeepers prove to be more challenging.
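
The paper’s production/consumption measures can be roughed out in a few lines. The thresholds below are illustrative assumptions for the sketch, not values from the paper:

```python
def leaning_profile(produced, consumed):
    """Average political leaning (-1 left .. +1 right) of content a user
    shares (production) vs. receives (consumption)."""
    prod = sum(produced) / len(produced)
    cons = sum(consumed) / len(consumed)
    return prod, cons

def is_gatekeeper(produced, consumed, partisan=0.6, diverse=0.3):
    """Gatekeeper: consumes diverse content but produces partisan content.
    The 0.6 / 0.3 cutoffs are assumptions for illustration only."""
    prod, cons = leaning_profile(produced, consumed)
    return abs(prod) >= partisan and abs(cons) <= diverse

# Toy user: shares hard-right content but follows a mixed timeline
print(is_gatekeeper(produced=[0.9, 0.8, 0.7], consumed=[-0.5, 0.2, 0.4]))
```

Comparing the two averages per user is what lets the authors separate partisans (both one-sided) from gatekeepers (one-sided output, mixed input).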

This is an interesting paper from a technical perspective, especially their findings on gatekeepers, but political echo chambers in Twitter is hardly surprising. Nor are political echo chambers new.

SourceWatch has a limited (time wise) history of echo chambers and attributes the creation of echo chambers to conservatives:

…conservatives pioneered the “echo chamber” technique,…

Amusing but I would not give conservatives that much credit.

Consider the echo chambers created by the Wall Street Journal (WSJ) versus the Guardian (formerly National Guardian, published in New York City), a Marxist publication, in the 1960s.

Or the differing content read by pro versus anti-war activists in the same time period. Or racists versus pro-integration advocates. Or pro versus anti Roe v. Wade, 410 U.S. 113 (1973), supporters.

Echo chambers existed before the examples I have listed but those are sufficient to show echo chambers are not new, despite claims by those who missed secondary education history classes.

The charge of “echo chamber” by SourceWatch, for example, carries with it an assumption that information delivered via an “echo chamber” is false, harmful, etc., versus their information, which leads to the truth, light and the American way. (Substitute whatever false totems you have for “the American way.”)

I don’t doubt the sincerity of SourceWatch. I doubt approaching others saying “…you need to crawl out from under your rock so I can enlighten you with the truth” leads to a reduction in echo chambers.

Becoming a gatekeeper, with a foot in two or more echo chambers, won’t reduce the number of echo chambers either. But it does have the potential to create gateways between echo chambers.

You’ve tried beating on occupants of other echo chambers with little or no success. Why not try co-occupying their echo chambers for a while?

DoS of Censorship at YouTube?

Filed under: Censorship — Patrick Durusau @ 8:42 am

YouTube publishes deleted videos report (BBC)

From the post:

During the last quarter of 2017, 8.3 million videos were deleted for violating “community guidelines.”

If you don’t approve of censorship of others, consider recycling deleted videos, altered to produce a different key signature, to increase the friction at YouTube for their deletion.

Think of overloading censors as a form of Denial-of-Service attack.

For example, if files known to be unacceptable were uploaded with altered key signatures and then reported by the uploader, from distributed accounts, it creates a loop of uploads and complaints. The overload helps protect other videos but also increases the friction for YouTube in maintaining its reign of censorship.

Further automation is possible by training an AI to search the web for videos that human censors at YouTube would find to violate “community standards.”

If you fall into the “I want to be effective, not so much clever” category, scraping YouTube deletion notices, then searching for those files elsewhere, is an effective, albeit brain dead alternative for discovery of unacceptable files at YouTube.

Censorship of others is wrong.* (full stop)

Anyone who practices it, merits whatever fate befalls them.

* Note the limitation to “censorship of others.” I routinely “censor/filter” what I choose to view, discuss, etc. I leave everyone else with an identical freedom to choose what they wish to view, discuss, etc. Please extend the same courtesy to me. Thanks!

April 24, 2018

United Arab Emirates Imitates EU

Filed under: Bugs,Contest,Cybersecurity — Patrick Durusau @ 7:22 pm

Approach the Crowdfense Vulnerability Research Hub webpage with caution! By posting:

Crowdfense budget for its first public Bug Bounty Program, launched April 2018, is $10 million USD.

Payouts for full-chain, previously unreported, exclusive capabilities range from $500,000 USD to $3 million USD per successful submission. Partial chains will be evaluated on a case-by-case basis and priced proportionally.

Within this program, Crowdfense evaluates only fully functional, top-quality 0-day exploits affecting the following platforms and products:

I have violated website restrictions, https://www.crowdfense.com/terms.html, clauses 1 and 4:

Subjecting me to:


Governing Law & Jurisdiction

These Terms will be governed by and interpreted in accordance with the laws of the United Arab Emirates, and you submit to the non-exclusive jurisdiction of the State and Federal Courts located in the Emirate of Abu Dhabi for the resolution of any disputes.

Sound absurd? It certainly is but no more absurd than the EU attempting to be the tail that wags the dog on issues of being forgotten or privacy.

The United Arab Emirates has as deep and storied a legal tradition as the EU but neither should be applied to actors outside their geographic borders.

If I am hosting content in the EU or United Arab Emirates, I am rightly subject to their laws. On the other hand, if I am hosting content or consuming content outside their geographic boundaries, it’s absurd to apply their laws to me.

If the EU or United Arab Emirates wish to regulate (read censor) Internet traffic within their borders, it’s wrong-headed but their choice. But they should not be allowed to make decisions about Internet traffic for other countries.

BaseX 9.0.1 (tool maintenance)

Filed under: BaseX,XML,XQuery — Patrick Durusau @ 3:17 pm

BaseX 9.0.1 (Maintenance Release):

Welcome to our BaseX 9.0.1 maintenance release. An update is highly recommended: The major release had a critical bug, regarding the storage of short non-ASCII Unicode strings.

This is the changelog:

Critical Bug Fixes

  • Storage: Short strings with extended Unicode characters fixed
  • XQuery: Nested path optimizations reenabled (e.g. in functions)
  • XQuery: map:merge, size computation fixed
  • XQuery: node ordering across multiple database instances fixed

Improvements

  • GUI: Better Java 9 support (DPI scaling, font rendering)
  • XQuery, collections: faster document root tests
  • New R client. Thanks Ben Engbers!
  • Linux: exec command used in startup scripts

Minor Bug Fixes

  • XQuery: Allow interruption of tail-call function calls
  • XQuery, HTTP parsing of content-type parameters
  • XQuery, restrict rewriting of filter to path expression
  • GUI: progress feedback when creating databases via double-click

If you want to interfere with, influence, change the outcome of, any of the US 2018 mid-term elections and/or the 2020 Presidential election, you need the latest and greatest in tools, as well as skill at using them.

Upgrade to BaseX 9.0.1 today!

How-To Interfere with 2018 Mid-Term Elections (US) (Duping Journalists)

Filed under: Journalism,News,Reporting — Patrick Durusau @ 10:26 am

Patrick Butler summarizes the tricks used by trolls to dupe journalists in How journalists can avoid being manipulated by trolls seeking to spread disinformation.

The reported techniques are:

  1. Capturing the narrative
  2. Disguising a forgery as a leak
  3. News spamming
  4. Keyword squatting

See Butler’s post and the links therein for more details.

If that sounds difficult, consider what Wikileaks did with the Podesta Emails.

A true leak of accurate copies of emails, the dribs-and-drabs release cycle by Wikileaks kept low-grade office gossip at the center of media attention.

I have long complained about the Wikileaks strategy of dragging out leaks but media attention to the Podesta emails proves me wrong, if your goal is to keep media attention on trivial content.

I’m not unsympathetic to journalists who want to fight “misinformation.” In a perfect world we would all fight “misinformation.” But so long as “misinformation” and efforts against it advance Western government approved views, eschewing “misinformation” looks like a bad plan.

Western Press Definition of “Disinformation”

Filed under: Journalism,Politics — Patrick Durusau @ 9:09 am

The debunking of claims of a chemical weapons attack in Syria by award-winning journalist Robert Fisk is the freshest evidence that “disinformation” is entirely a matter of perspective.

All the major media channels dutifully repeated the false/no-evidence claims of gas attacks in Syria. The same media channels decry “disinformation” and call for censoring non-traditional news sources.

“Disinformation,” means claims, truthful or not, unsanctioned by a Western government.*

To counter attempts to discount the reporting by Fisk, consider the recitation of awards over his career by Wikipedia:


Fisk has received the British Press Awards’ International Journalist of the Year seven times, and twice won its “Reporter of the Year” award. He also received Amnesty International UK Media Awards in 1992 for his report “The Other Side of the Hostage Saga”, in 1998 for his reports from Algeria and again in 2000 for his articles on the NATO air campaign against the FRY in 1999.

  • 1984 Lancaster University honorary degree
  • 1991 Jacob’s Award for coverage of the Gulf War on RTÉ Radio 1
  • 1999 Orwell Prize for journalism
  • 2001 David Watt Prize for an investigation of the 1915 Armenian Genocide by the Ottoman Empire
  • 2002 Martha Gellhorn Prize for Journalism
  • 2003 Open University honorary doctorate
  • 2004 University of St Andrews honorary degree
  • 2004 Carleton University honorary degree
  • 2005 Adelaide University Edward Said Memorial lecture
  • 2006 Ghent University honorary degree Political and Social Sciences
  • 2006 American University of Beirut honorary degree
  • 2006 Queen’s University Belfast honorary degree
  • 2006 Lannan Cultural Freedom Prize worth $350,000
  • 2008 University of Kent honorary degree
  • 2008 Trinity College Dublin honorary doctorate
  • 2009 College Historical Society’s Gold Medal for Outstanding Contribution to Public Discourse
  • 2009 Liverpool Hope University honorary degree
  • 2011 International Prize at the Amalfi Coast Media Awards, Italy

*I don’t automatically credit claims by non-Western governments. There is no evidence to show non-Western governments are more prone to lie than Western governments. All governments, Western and non-Western lie.

Anyone worthy of the title “journalist” who doesn’t start with that premise has already betrayed the reading public.

April 11, 2018

Lagging on Balisage Paper? PsyOps Advice

Filed under: Conferences,Persuasion — Patrick Durusau @ 3:19 pm

Are you lagging on your Balisage paper for submission on 22 April 2018?

Robert Cialdini, Pre-Suasion, at pages 116-120, details how to build a persuasive geography that focuses you on a Balisage submission.

The “trick” is to create geography/surroundings that remind you of Balisage, along with the great people, papers and conversations.

How?

Go to the Balisage Social Pages to grab pics of people, social gatherings, etc., from Balisage, make them into screen savers and posters, and turn your work space into a mini-Balisage den.

Try it! You will be thinking about and working on your Balisage paper nearly constantly!

PS: The same trick works for other things but I would reserve it for Balisage papers. 😉

April 5, 2018

The EFF’s BFF? – Government

Filed under: Electronic Frontier Foundation,Government — Patrick Durusau @ 7:42 pm

DHS Confirms Presence of Cell-site Simulators in U.S. Capital by Cooper Quintin.

The present situation:

The Department of Homeland Security has finally confirmed what many security specialists have suspected for years: cell-phone tracking technology known as cell-site simulators (CSS) are being operated by potentially malicious actors in our nation’s capital.

Anyone with the skill level of a hobbyist can now build their own passive IMSI catcher for as little as $7 or an active cell-site simulator for around $1000. Moreover, mobile surveillance vendors have displayed a willingness to sell their goods to countries who can afford their technology, regardless of their human rights records.

The EFF’s solution:


Law enforcement and the intelligence community would surely agree that these technologies are dangerous in the wrong hands, but there is no way to stop criminals and terrorists from using these technologies without also closing the same security flaws that law enforcement uses. Unlike criminals however, law enforcement can still obtain search warrants and work directly with the phone companies to get subscribers’ location, so they would not lose any capabilities if the vulnerabilities CSSs rely on were fixed.

Why the EFF trusts a government that has spied on the American people for decades is a question you need to put to the EFF. I can’t think of any sensible explanation for their position.

I’ve been meaning to ask: How does it feel to be lumped in with “…criminals and terrorists…?”

You may be an average citizen who is curious about who your member of Congress or state/local official is sleeping with, who is paying them off, or other normal and customary functions of government.

A CSS device can contribute towards meaningful government transparency. Perhaps that’s why the EFF resists CSS devices being in the hands of citizens.

We’ll lose our dependence on the EFF for what minimal transparency does exist.

I can live with that.

I am the very model of a hacker individual…

Filed under: Hacking — Patrick Durusau @ 3:23 pm

Pure brilliance posted to Twitter by Karen Reilly, @akareilly:

I am the very model of a hacker individual,
I’ve information cryptographic, analog and digital,
I know every cypherpunk, adhere to Kerckhoff’s principle,
I bounce from node to node so I can make myself invisible.

I’m very well acquainted, too, with server vulnerability,
I escalate my privilege and I trash availability,
I know the latest breaches and I know first when the ‘net’s ablaze,
With many cheerful facts about developments in zero days.

I’m very good at cracking but I can support security;
I know that it is bollocks if you seek it with obscurity:
In short, in matters cryptographic, analog and digital,
I am the very model of a hacker individual.

I know our hacker history from Ada to the Admiral,
If I ever leave a trace at most it is ephemeral,
I clone your black box hardware tokens or I social engineer
I fill logfiles with peculiarities that cause CTO fear

I can open any doors with tumbler locks or RFID
I've got root and have the keys to all your cryptocurrency
I can hum your servers dead by reaching a high decibel
No matter where I am, I am guaranteed to pop a shell

In short, in matters cryptographic, analog and digital,
I am the very model of a hacker individual.

I have seen other verses but am not certain of their placement. Perhaps that’s intentional.

In any event, this is the first version I saw on Twitter. Other arrangements and content are likely to exist and be equally enjoyable.

April 4, 2018

Glossary of Defense Acquisition Acronyms and Terms

Filed under: Vocabularies — Patrick Durusau @ 7:55 pm

Glossary of Defense Acquisition Acronyms and Terms

Not nearly all that you will need for the acronyms and terms even for defense work in the United States, but certainly a good starter set.

From the webpage:

Department of Defense, Defense Acquisition University (DAU), Foundational Learning Directorate, Center for Acquisition and Program Management, Fort Belvoir, Virginia

The DAU Glossary reflects most acronyms, abbreviations, and terms commonly used in the systems acquisition process within the Department of Defense (DoD) and defense industries. It focuses on terms with generic DoD application but also includes some Service-unique terms. It has been extensively revised to reflect current acquisition initiatives and policies. While the glossary identifies and highlights many terms, it is not all-inclusive, particularly regarding the military Services, defense agencies and other organizationally unique terms. The Glossary contains a listing of common abbreviations, acronyms and definitions of terms used throughout the DoD acquisition community, including terms that have commonality between U.S. and Allied acquisition programs. The Glossary is for use by students of DAU, and others working on defense acquisition matters, including congressional staffs, Pentagon and other headquarters (HQ) staffs, program managers and requirements managers of the DoD, and defense contractors.

DISCLAIMER

The Glossary of Defense Acquisition Acronyms and Terms provides an extensive list of acronyms, abbreviations and terms commonly used in the systems acquisition process within the DoD and defense industries. Many of the terms in the Glossary may be defined in other documents in a different fashion. For example, the Federal Acquisition Regulation (FAR) contains upwards of 600 definitions of words and terms. Definitions that are applicable to all parts of the FAR are contained in FAR Part 2, Definitions of Words and Terms, which contains close to 250 definitions.

Other words and terms may be defined for a particular part, subpart or section. Some terms, such as “United States”, have multiple definitions. “United States” is defined 11 different ways in the FAR, due to how it is defined in various pieces of legislation. Some of those definitions differ from the ones contained in the Glossary.

The reader may want to use definitions that are provided in the Glossary in solicitations and resulting contracts to help clarify the government’s requirement. In doing so, keep in mind the FAR requires that all solicitations and contracts exceeding the simplified acquisition threshold incorporate the definitions in FAR 2.101 Definitions.

See FAR 52.202-1, Definitions, for appropriate clause.

Take heed of the topic map-like warning that other definitions of these terms exist!
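That warning suggests a data structure: rather than forcing each term into a single definition, store definitions with the scope (source) they come from, topic-map style. A minimal sketch in Python; the definitions below are illustrative placeholders, not the actual DAU Glossary or FAR text:

```python
from collections import defaultdict

# Map each term to a list of scoped definitions, so conflicting
# sources can coexist instead of one definition silently "winning".
glossary = defaultdict(list)

def define(term, scope, definition):
    """Record a definition of `term` as given by a particular source."""
    glossary[term].append({"scope": scope, "definition": definition})

# "United States" is defined differently across sources (the FAR alone
# defines it 11 ways); these entries are hypothetical stand-ins.
define("United States", "FAR 2.101", "the 50 States and the District of Columbia")
define("United States", "FAR 25.003", "the 50 States, DC, and outlying areas")
define("United States", "DAU Glossary", "the definition given by the DAU Glossary")

def lookup(term, scope=None):
    """Return all definitions of `term`, or only those from one scope."""
    entries = glossary.get(term, [])
    if scope is None:
        return entries
    return [e for e in entries if e["scope"] == scope]

print(len(lookup("United States")))                 # definitions across all scopes
print(lookup("United States", "FAR 2.101")[0]["definition"])
```

A reader who asks "which definition applies here?" then gets an explicit answer keyed to the governing document, rather than whichever definition a flat glossary happened to keep.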

192 Search Strings for Never To Be Patched Intel CPUs

Filed under: Cybersecurity,Security — Patrick Durusau @ 7:42 pm

Mohit Kumar, in Intel Admits It Won’t Be Possible to Fix Spectre (V2) Flaw in Some Processors, links to a microcode revision guide from Intel (PDF), which identifies the CPUs that won’t be patched for Spectre (variant 2) flaws.

Kumar lists the product families (Bloomfield, Clarksfield, Gulftown, Harpertown Xeon, Jasper Forest, Penryn, SoFIA 3GR, Wolfdale, and Yorkfield), but those are Intel codenames, not product names.

To simplify your searching for never-to-be-patched Intel chips, I created a list of the public chip names, some 192 of them.
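Turning such a list into search strings can be sketched in a few lines of Python. The chip names below are a handful of illustrative examples drawn from the families Kumar lists, not the full 192-entry list from the post, and the search qualifier is one plausible choice, not the only one:

```python
# A few public product names from the affected Intel families
# (illustrative subset; the full list runs to 192 names).
chip_names = [
    "Core i7-920",        # Bloomfield
    "Core i7-720QM",      # Clarksfield
    "Core i7-980X",       # Gulftown
    "Xeon E5450",         # Harpertown
    "Core 2 Duo E8400",   # Wolfdale
    "Core 2 Quad Q9550",  # Yorkfield
]

# Pair each public name with a qualifier so search results surface
# Spectre patch status rather than shopping pages.
search_strings = [f'"Intel {name}" Spectre variant 2' for name in chip_names]

for s in search_strings:
    print(s)
```

The same loop works unchanged on the full 192-name list; only the `chip_names` input grows.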

Good hunting!
