Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

April 17, 2016

HackBack! A DIY Guide [Attn: Everybody A Programmer Advocates]

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:21 pm

HackBack! A DIY Guide

hack-back

From the introduction:

You’ll notice the change in language since the last edition [1]. The English-speaking world already has tons of books, talks, guides, and info about hacking. In that world, there’s plenty of hackers better than me, but they misuse their talents working for “defense” contractors, for intelligence agencies, to protect banks and corporations, and to defend the status quo. Hacker culture was born in the US as a counterculture, but that origin only remains in its aesthetics – the rest has been assimilated. At least they can wear a t-shirt, dye their hair blue, use their hacker names, and feel like rebels while they work for the Man.

You used to have to sneak into offices to leak documents [2]. You used to need a gun to rob a bank. Now you can do both from bed with a laptop in hand [3][4]. Like the CNT said after the Gamma Group hack: “Let’s take a step forward with new forms of struggle” [5]. Hacking is a powerful tool, let’s learn and fight!

[1] http://pastebin.com/raw.php?i=cRYvK4jb
[2] https://en.wikipedia.org/wiki/Citizens%27_Commission_to_Investigate_the_FBI
[3] http://www.aljazeera.com/news/2015/09/algerian-hacker-hero-hoodlum-150921083914167.html
[4] https://securelist.com/files/2015/02/Carbanak_APT_eng.pdf
[5] http://madrid.cnt.es/noticia/consideraciones-sobre-el-ataque-informatico-a-gamma-group

I thought the shout out to hard working Russian hackers was a nice touch!

If you are serious about your enterprise security, task one of your better infosec people to use HackBack! A DIY Guide as a starting point against your own network.

Think of it as a realistic test of your network security.

For “everybody a programmer” advocates, consider setting up networks booting off read-only media and specific combinations of vulnerable software to encourage practice hacking of those systems.

Think of hacking “practice” systems as validation of hacking skills. Not to mention being great training for future infosec types.

PS: Check computer surplus if you want to duplicate some current government IT systems.

I first saw this in FinFisher’s Account of How He Broke Into Hacking Team Servers by Catalin Cimpanu.

UNIX, Bi-Grams, Tri-Grams, and Topic Modeling

UNIX, Bi-Grams, Tri-Grams, and Topic Modeling by Greg Brown.

From the post:

I’ve built up a list of UNIX commands over the years for doing basic text analysis on written language. I’ve built this list from a number of sources (Jim Martin‘s NLP class, StackOverflow, web searches), but haven’t seen it much in one place. With these commands I can analyze everything from log files to user poll responses.

Mostly this just comes down to how cool UNIX commands are (which you probably already know). But the magic is how you mix them together. Hopefully you find these recipes useful. I’m always looking for more so please drop into the comments to tell me what I’m missing.

For all of these examples I assume that you are analyzing a series of user responses with one response per line in a single file: data.txt. With a few cut and paste commands I often apply the same methods to CSV files and log files.
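To give a flavor of the recipes, here is how bi-grams are commonly counted with standard UNIX tools (my own sketch, not copied from Greg's post):

```shell
# Make a tiny data.txt (one response per line) to experiment with.
printf 'the cat sat\nthe cat ran\n' > data.txt

# Lowercase and split into one word per line.
tr 'A-Z' 'a-z' < data.txt | tr -sc 'a-z0-9' '\n' | grep -v '^$' > words.txt

# Offset the word list by one, pair each word with its successor,
# then count the pairs. (Naive: the last pair straddles responses,
# so sed drops the dangling final line.)
tail -n +2 words.txt > next.txt
paste words.txt next.txt | sed '$d' | sort | uniq -c | sort -rn
```

The same trick extends to tri-grams (and the reader's hexagram) by offsetting the word list again and giving paste a third column.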

My favorite comment on this post was a reader who extended the tri-gram generator to build a hexagram!

If that sounds unreasonable, you haven’t read very many government reports. 😉

While you are at Greg’s blog, notice a number of useful posts on Elasticsearch.

April 16, 2016

UC Davis Spent $175,000.00 To Suppress This Image (let’s disappoint them)

Filed under: Politics,Searching — Patrick Durusau @ 8:19 pm

uc-davis-pic

If you have a few minutes, could you repost this image to your blog and/or Facebook page?

Some references you may want to cite include:

Pepper-sprayed students outraged as UC Davis tried to scrub incident from web by Anita Chabria

Calls for UC Davis chancellor’s ouster grow amid Internet scrubbing controversy by Sarah Parvini and Ruben Vives.

UC Davis Chancellor Faces Calls To Resign Over Pepper Spray Incident (NPR)

Katehi’s effort to alter search engine results backfires spectacularly

UC Davis’ damage control: Dumb-de-dumb-dumb

Reposting the image and links to the posts cited above will help frustrate this ill-conceived plan to suppress it.

What is more amazing than the chancellor thinking information on the Internet can be suppressed, at least for a paltry $175K, is that this pattern will be repeated year after year.

Lying about information known to others is a losing strategy, always.

But that strategy will be picked up by other universities, governments and their agencies, corporations, to say nothing of individuals.

Had UC Davis spent that $175K on better training for its police officers, people would still talk about this event, but it would be in contrast to the new and improved way UC Davis deals with protesters.

That’s not likely to happen now.

Hello World – Machine Learning Recipes #1

Filed under: Machine Learning,Python,Scikit-Learn,TensorFlow — Patrick Durusau @ 7:43 pm

Hello World – Machine Learning Recipes #1 by Josh Gordon.

From the description:

Six lines of Python is all it takes to write your first machine learning program! In this episode, we’ll briefly introduce what machine learning is and why it’s important. Then, we’ll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up.
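For reference, a six-line supervised-learning program of the kind the episode describes looks like this (a sketch using scikit-learn's DecisionTreeClassifier; the fruit features and labels are illustrative, not necessarily the video's exact data):

```python
from sklearn import tree

# Toy training data: [weight_grams, texture] where 1 = smooth, 0 = bumpy.
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]  # 0 = apple, 1 = orange

clf = tree.DecisionTreeClassifier()
clf.fit(features, labels)
print(clf.predict([[160, 0]]))  # a heavy, bumpy fruit -> [1] (orange)
```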

The first in a promised series on machine learning using scikit learn and TensorFlow.

It's the quality of video you wish were available for intermediate and advanced treatments.

Quite a treat! Pass it on to anyone interested in machine learning.

Enjoy!

April 15, 2016

AstroImageJ – ImageJ for Astronomy

Filed under: Astroinformatics — Patrick Durusau @ 9:23 pm

AstroImageJ – ImageJ for Astronomy

From the webpage:

AstroImageJ (AIJ)

  • Runs on Linux, Windows and Mac OS
  • Provides an interactive interface similar to ds9
  • Reads and writes FITS images with standard headers
  • Allows FITS header viewing and editing
  • Plate solves and adds WCS to images seamlessly using the Astrometry.net web interface
  • Displays astronomical coordinates for images with WCS
  • Provides object identification via an embedded SIMBAD interface
  • Aligns image sequences using WCS headers or by using apertures to correlate stars
  • Image calibration including bias, dark, flat, and non-linearity correction with option to run in real-time
  • Interactive time-series differential photometry interface with option to run in real-time
  • Allows comparison star ensemble changes without re-running differential photometry
  • Provides an interactive multi-curve plotting tool streamlined for plotting light curves
  • Includes an interactive light curve fitting interface with simultaneous detrending
  • Allows non-destructive object annotations/labels using FITS header keywords
  • Provides a time and coordinate converter tool with capability to update/enhance FITS header content (AIRMASS, BJD, etc.)
  • Exports analyses formatted as spreadsheets
  • Creates color images with native ImageJ processing power
  • Optionally enter reference star apparent magnitudes to calculate target star magnitudes automatically
  • Optionally create Minor Planet Center (MPC) format for direct submission of data to the MPC

aij_image_display_w400

aij_light_curve_w400

When the noise from social media gets too shrill….

Enjoy!

April 14, 2016

Clojure Code Sample Appears to VBA team

Filed under: Art,Humor,Programming — Patrick Durusau @ 8:17 pm

clojure-code-sample-vba-team

The caption as reported at: Classic Programmer Paintings reads:

“Consultant shows Clojure code sample to VBA team”, Rembrandt, Oil on canvas, 1635

Whether shown by a consultant or being written on the wall by a disembodied hand, I suspect the impact would be the same. 😉

There is a Bosch triptych at Classic Programmer Paintings.

I was about to lament the lack of high-resolution Bosch images but then discovered Extraordinary Interactive Hi-Res Exhibit of Bosch’s ‘Garden of Earthly Delights’ by Christopher Jobson.

As Jobson comments:

This is the internet we were promised.

Enjoy!

First Draft – Observational Challenge

Filed under: Journalism,News,Reporting,Verification — Patrick Durusau @ 7:40 pm

First Draft – Observational Challenge by Jenni Sargent.

From the webpage:

Can you identify these locations based on the visual clues in the pictures? Think about street signs, the language of shop signs, the landscape and architecture to get an idea of where it might be.

To learn more about identifying locations, see this article on ‘Piecing together visual clues for verification‘.

From the challenge:

Each photograph in this challenge contains visual clues to help you identify where it was taken.

You have four chances to get each one right.

Helpful hints appear with every wrong answer but you will lose 1 point with each attempt.

Sorry! No spoilers!

I’m surprised that someone doesn’t have a daily photo-id twitter stream.

They may; if you run across one, ping me!

Enjoy!

Visualizing Data Loss From Search

Filed under: Entity Resolution,Marketing,Record Linkage,Searching,Topic Maps,Visualization — Patrick Durusau @ 3:46 pm

I used searches for “duplicate detection” (3,854) and “coreference resolution” (3,290) in “Ironically, Entity Resolution has many duplicate names” [Data Loss] to illustrate potential data loss in searches.

Here is a rough visualization of the information loss if you use only one of those terms:

duplicate-v-coreference-500-clipped

If you search for “duplicate detection,” you miss all the articles shaded in blue.

If you search for “coreference resolution,” you miss all the articles shaded in yellow.

Suggestions for improving this visualization?

It is a visualization that could be performed on a client's data, using their search engine/database, in order to identify the data loss they are suffering now from search across departments.

With the caveat that not all data loss is bad and/or worth avoiding.

Imaginary example (so far): What if you could demonstrate no overlap in terminology between two vendors, one serving the United States Army and the other the Air Force? That is, no query terms for one returned useful results for the other.

That is a starting point for evaluating the use of topic maps.

While the divergence in terminologies is a given, the next question is: What is the downside to that divergence? What capability is lost due to that divergence?

Assuming you can identify such a capability, the next question is to evaluate the cost of reducing and/or eliminating that divergence versus the claimed benefit.

I assume the most relevant terms are going to be those internal to customers and/or potential customers.

Interest in working this up into a client prospecting/topic map marketing tool?


Separately I want to note my discovery (you probably already knew about it) of VennDIS: JavaFX-based Venn and Euler diagram software for generating publication-quality figures. Download here. (Apologies, the publication itself is firewalled.)

The export defaults to 800 x 800 resolution. If you need something smaller, edit the resulting image in Gimp.

It’s a testimony to the software that I was able to produce a useful image in less than a day. Kudos to the software!

April 13, 2016

“Ironically, Entity Resolution has many duplicate names” [Data Loss]

Filed under: Entity Resolution,Topic Maps — Patrick Durusau @ 8:45 pm

Nancy Baym tweeted:

“Ironically, Entity Resolution has many duplicate names” – Lise Getoor

entity-resolution

I can’t think of any subject that doesn’t have duplicate names.

Can you?

In a “search driven” environment, not knowing the “duplicate” names for a subject means data loss.

Data loss that could include “smoking gun” data.

Topic mappers have been making that pitch for decades but it never has really caught fire.

I don’t think anyone doubts that data loss occurs, but the gravity of that data loss remains elusive.

For example, let’s take three duplicate names for entity resolution from the slide: duplicate detection, reference reconciliation, and coreference resolution.

Supplying all three as quoted strings to CiteSeerX, any guesses on the number of “hits” returned?

As of April 13, 2016:

  • duplicate detection – 3,854
  • reference reconciliation – 253
  • coreference resolution – 3,290

When running the query "duplicate detection" "coreference resolution", only 76 “hits” are returned, meaning that there are only 76 overlapping cases reported in the total of 7,144 for both of those terms separately.

That’s assuming CiteSeerX isn’t shorting me on results due to server load, etc. I would have to cross-check the data itself before I would swear to those figures.

But consider just the raw numbers I report today: duplicate detection – 3,854, coreference resolution – 3,290, with 76 overlapping cases.
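A quick back-of-the-envelope calculation makes the point (my arithmetic, using the counts above):

```python
dup, coref, both = 3854, 3290, 76  # CiteSeerX hits as of April 13, 2016

union = dup + coref - both   # articles matching either term
overlap = both / union       # Jaccard-style overlap of the two literatures
print(union, f"{overlap:.1%}")  # -> 7068 1.1%
```

Roughly a one percent overlap between two bodies of work on the same problem.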

That’s two distinct lines of research on the same problem, for the most part, ignoring the other.

What do you think the odds are of duplication of techniques, experiences, etc., spread out over those 7,144 articles?

Instead of you or your client duplicating a known-to-somebody solution, you could be building an enhanced solution.

Well, except for the data loss due to “duplicate names” in a search environment.

And that you would have to re-read all the articles in order to find which technique or advancement was made in each article.

Multiply that by everyone who is interested in a subject and it's a non-trivial amount of effort.

How would you like to avoid data loss and duplication of effort?

April 12, 2016

Anonymous Chat Service

Filed under: Cybersecurity,Encryption,Government,Privacy,Security,Tor — Patrick Durusau @ 7:43 pm

From the description:

The continued effort of governments around the globe to censor our seven sovereign seas has not gone unnoticed. This is why we, once again, raise our Anonymous battle flags to expose their corruption and disrupt their surveillance operations. We are proud to present our new chat service residing within the remote island coves of the deep dark web. The OnionIRC network is designed to allow for full anonymity and we welcome any and all to use it as a hub for anonymous operations, general free speech use, or any project or group concerned about privacy and security looking to build a strong community. We also intend to strengthen our ranks and arm the current and coming generations of internet activists with education. Our plan is to provide virtual classrooms where, on a scheduled basis, ‘teachers’ can give lessons on any number of subjects. This includes, but is not limited to: security culture, various hacking/technical tutorials, history lessons, and promoting how to properly utilize encryption and anonymity software. As always, we do not wish for anyone to rely on our signal alone. As such, we will also be generating comprehensible documentation and instructions on how to create your own Tor hidden-service chat network in order to keep the movement decentralized. Hackers, activists, artists and internet citizens, join us in a collective effort to defend the internet and our privacy.

Come aboard or walk the plank.

We are Anonymous,
we’ve been expecting you.

Protip: This is not a website, it’s an IRC chat server. You must use an IRC chat client to connect. You cannot connect simply through a browser.

Some popular IRC clients are: irssi, weechat, hexchat, mIRC, & many more https://en.wikipedia.org/wiki/Compari…

Here is an example guide for connecting with Hexchat: https://ghostbin.com/paste/uq7bt/raw

To access our IRC network you must be connecting through the Tor network! https://www.torproject.org/

Either download the Tor browser or install the Tor daemon, then configure your IRC client’s proxy settings to pass through Tor or ‘torify’ your client depending on your setup.

If you are connecting to Tor with the Tor browser, keep in mind that the Tor browser must be open & running for you to pass your IRC client through Tor.

How you configure your client to pass through Tor will vary depending on the client.
Hostname: onionirchubx5363.onion

Port: 6667 No SSL, but don’t worry! Tor connections to hidden-services are end-to-end encrypted already! Thank you based hidden-service gods!
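Pending the promised client-specific guides, one plausible route on a Debian-like system is to run the Tor daemon and torify a console client. This is my sketch, not from the announcement; package names, service commands, and client flags are assumptions to check against your own setup:

```shell
# Install Tor, torsocks, and a console IRC client (Debian/Ubuntu names).
sudo apt-get install tor torsocks irssi

# Start the Tor daemon (SOCKS proxy on 127.0.0.1:9050 by default).
sudo service tor start

# Route the IRC client through Tor and connect to the hidden service.
torsocks irssi -c onionirchubx5363.onion -p 6667
```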

In the near future we will be releasing some more extensive client-specific guides and how-to properly setup Tor for transparent proxying (https://trac.torproject.org/projects/…) & best use cases.

This is excellent news!

With more good news promised in the near future (watch the video).

Go dark, go very dark!

LSTMetallica:…

Filed under: Machine Learning,Music — Patrick Durusau @ 7:30 pm

LSTMetallica: Generation drum tracks by learning the drum tracks of 60 Metallica songs by Keunwoo Choi.

From the post:

Word-RNN (LSTM) on Keras with wordified text representations of Metallica’s drumming midi files, which came from midiatabase.com.

  • Midi files of Metallica track comes from midiatabase.com.
  • LSTM model comes from Keras.
  • Read Midi files with python-midi.
  • Convert them to a text file (corpus) by my rules, which are
    • (Temporal) Quantisation
    • Simplification/Omitting some notes
    • ‘Word’ with binary numbers
  • Learn an LSTM model with the corpus and generate by prediction of words.
  • Words in a text file → midi according to the rules I used above.
  • Listen!
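The "word with binary numbers" step is the interesting trick: each quantised time step becomes a word whose bits record which drums were struck. A toy illustration (my reading of the rules above, with a hypothetical three-piece kit):

```python
# Encode each quantised time step as a binary-number "word":
# one bit per drum, 1 if that drum is struck on the step.
drums = ["kick", "snare", "hihat"]  # hypothetical, simplified kit

def wordify(step):
    """step: set of drums struck on this time step -> binary word."""
    return "".join("1" if d in step else "0" for d in drums)

track = [{"kick", "hihat"}, {"hihat"}, {"snare", "hihat"}, {"hihat"}]
corpus = " ".join(wordify(s) for s in track)
print(corpus)  # -> 101 001 011 001
```

The resulting text corpus is what the word-level LSTM then learns from, exactly as it would learn from prose.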

I mention this in part to inject some variety into the machine learning resources I have mentioned.

The failures of machine learning recommendations can be amusing. When it works, the results are mostly rather dull.

Learning from drum tracks has the potential to combine drum tracks from different groups, resulting in something new.

It may be fit for listening, maybe not. You won't know without trying it.

Enjoy!

History Unfolded: US Newspapers and the Holocaust [Editors/Asst. Editors?]

Filed under: Crowd Sourcing,History,News — Patrick Durusau @ 6:34 pm

History Unfolded: US Newspapers and the Holocaust

From the webpage:

What did American newspapers report about the Holocaust during World War II? Citizen historians participating in History Unfolded: US Newspapers and the Holocaust will help the US Holocaust Memorial Museum answer this question.

Your Role

Participants will explore their local newspapers for articles about the Holocaust, and submit their research into a centralized database. The collected data will show trends in American reporting.

Citizen historians like you will explore Holocaust history as both an American story and a local story, learn how to use primary sources in historical research, and challenge assumptions about American knowledge of and responses to the Holocaust.

Project Outcomes

Data from History Unfolded: U.S. Newspapers and the Holocaust will be used for two main purposes:
to inform the Museum’s upcoming exhibition on Americans and the Holocaust, and to enhance scholarly research about the American press and the Holocaust.

Our Questions

  • What did people in your community know about the event?
  • Was the information accurate?
  • What do the newspapers tell us about how local and national leaders and community members reacted to news about the event?

Historical Background

During the 1930s, a deeply rooted isolationism pervaded American public opinion. Americans were scornful of Europe’s inability to organize its affairs following the destruction of WWI and feared being drawn into European matters. As a result, news about the Holocaust arrived in an America fraught with isolation, cynicism, and fear of being deceived by government propaganda. Even so, the way the press told the story of the Holocaust—the space allocated, the location of the news in the paper, and the editorial opinions—shaped American reactions.

U.S. Press Coverage of the Holocaust

The press has influence on public opinion. Media attention enhances the importance of an issue in the eyes of the public. The U.S. press had reported on Nazi violence against Jews in Germany as early as 1933. It covered extensively the Nuremberg Laws of 1935 and the expanded German antisemitic legislation of 1938 and 1939. The nationwide state-sponsored violence of November 9-10, 1938, known as Kristallnacht, made front page news in dailies across the U.S.

As the magnitude of anti-Jewish violence increased in 1939-1941, many American newspapers ran descriptions of German shooting operations, first in Poland and later after the invasion of the Soviet Union. As early as July 2, 1942, the New York Times reported on the operations of the killing center in Chelmno, based on sources from the Polish underground. The article, however, appeared on page six of the newspaper.

During the Holocaust, the American press did not always publicize reports of Nazi atrocities in full or with prominent placement. For example, the New York Times, the nation’s leading newspaper, generally deemphasized the murder of the Jews in its news coverage. Although the Times covered the December 1942 statement of the Allies condemning the mass murder of European Jews on its front page, it placed coverage of the more specific information released on page ten, significantly minimizing its importance. Similarly, on July 3, 1944, the Times provided on page 3 a list by country of the number of Jews “eradicated”; the Los Angeles Times placed the report on page 5.

How did your hometown cover these events?

I first saw this in What did Americans know as the Holocaust unfolded? Quite a lot, it turns out. by Tara Bahrampour, follow @TaraBahrampour.

I have registered for the project and noticed that although author bylines are captured, there doesn’t seem to be a routine to capture editors, assistant editors, etc. Newspapers don’t assemble themselves.

The site focuses on twenty (20) major events, starting with “Dachau Opens,” March 22, 1933 and ending with “FDR Delivers His Fourth Inaugural Address,” January 20, 1945.

The interfaces seem very intuitive and I am looking forward to searching my local newspaper for one or more of these events.

PS: Anti-Semites didn’t and don’t exist in isolation. Graphing relationships over history in your community may help explain some of the news coverage you do or don’t find.

Planet TinkerPop [+ 2 New Graph Journals]

Filed under: Graphs,Gremlin,TinkerGraph,TinkerPop — Patrick Durusau @ 3:44 pm

Planet TinkerPop

From the webpage:

Planet TinkerPop is a vendor-agnostic, community-driven site aimed at advancing graph technology in general and Apache TinkerPop™ in particular. Graph technology is used to manage, query, and analyze complex information topologies composed of numerous heterogenous relationships and is currently benefiting companies such as Amazon, Google, and Facebook. For all companies to ultimately adopt graph technology, vendor-agnostic graph standards and graph knowledge must be promulgated. For the former, TinkerPop serves as an Apache Software Foundation governed community that develops a standard graph data model (the property graph) and query language (Gremlin). Apache TinkerPop is a widely supported graph computing framework that has been adopted by leading graph system vendors and interfaced with by numerous graph-based applications across various industries. For educating the public on graphs, Planet TinkerPop’s Technology journal publishes articles about TinkerPop-related graph research and development. The Use Cases journal promotes articles on the industrial use of graphs and TinkerPop. The articles are contributed by members of the Apache TinkerPop community and additional contributions are welcomed and strongly encouraged. We hope you enjoy your time learning about graphs here at Planet TinkerPop.

If you are reading about Planet TinkerPop I can skip the usual “graphs are…” introductory comments. 😉

Planet TinkerPop is a welcome addition to the online resources on graphs in general and TinkerPop in particular.

So they aren’t buried in the prose, let me highlight two new journals at Planet TinkerPop:

TinkerPop Technology journal  publishes articles about TinkerPop-related graph research and development.

TinkerPop Use Cases journal  promotes articles on the industrial use of graphs and TinkerPop.

Both are awaiting your contributions!

Enjoy!

PS: I prepended “TinkerPop” to the journal names and suggest that an ISSN (see http://loc.gov/issn/form/) would be appropriate for both journals.

Coeffects: Context-aware programming languages – Subject Identity As Type Checking?

Filed under: Subject Identity,TMRM,Topic Maps,Types — Patrick Durusau @ 3:14 pm

Coeffects: Context-aware programming languages by Tomas Petricek.

From the webpage:

Coeffects are Tomas Petricek‘s PhD research project. They are a programming language abstraction for understanding how programs access the context or environment in which they execute.

The context may be resources on your mobile phone (battery, GPS location or a network printer), IoT devices in a physical neighborhood or historical stock prices. By understanding the neighborhood or history, a context-aware programming language can catch bugs earlier and run more efficiently.

This page is an interactive tutorial that shows a prototype implementation of coeffects in a browser. You can play with two simple context-aware languages, see how the type checking works and how context-aware programs run.

This page is also an experiment in presenting programming language research. It is a live environment where you can play with the theory using the power of new media, rather than staring at dead pieces of wood (although we have those too).

(break from summary)

Programming languages evolve to reflect the changes in the computing ecosystem. The next big challenge for programming language designers is building languages that understand the context in which programs run.

This challenge is not easy to see. We are so used to working with context using the current cumbersome methods that we do not even see that there is an issue. We also do not realize that many programming features related to context can be captured by a simple unified abstraction. This is what coeffects do!

What if we extend the idea of context to include the context within which words appear?

For example, writing a police report, the following sentence appeared:

There were 20 or more <proxy string="black" pos="noun" synonym="African" type="race"/>s in the group.

For display purposes, the string value “black” appears in the sentence:

There were 20 or more blacks in the group.

But a search for the color “black” would not return that report because the type = color does not match type = race.

On the other hand, if I searched for African-American, that report would show up because “black” with type = race is recognized as a synonym for people of African extraction.
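A minimal sketch of that matching rule (a hypothetical data model of mine, not Petricek's):

```python
# A display string plus subject-identity properties, as in the
# police report example above.
proxy = {"string": "black", "pos": "noun", "synonym": "African", "type": "race"}

def matches(proxy, query_term, query_type):
    # A query hits only when the term matches the string or a synonym
    # AND the declared types agree.
    names = {proxy["string"], proxy["synonym"]}
    return query_term in names and query_type == proxy["type"]

print(matches(proxy, "black", "color"))   # type mismatch -> False
print(matches(proxy, "African", "race"))  # synonym + type match -> True
```

Type checking, in other words, doing the work of subject identity.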

Inline proxies are the easiest to illustrate but that is only one way to serialize such a result.

If done in an authoring interface, such an approach would have the distinct advantage of offering the original author the choice of subject properties.

The advantage of involving the original author is that they have an interest in and awareness of the document in question. Quite unlike automated processes that later attempt annotation by rote.

An Introduction to Threat Modeling

Filed under: Cybersecurity,Government,Privacy — Patrick Durusau @ 2:09 pm

An Introduction to Threat Modeling

From the post:

There is no single solution for keeping yourself safe online. Digital security isn’t about which tools you use; rather, it’s about understanding the threats you face and how you can counter those threats. To become more secure, you must determine what you need to protect, and whom you need to protect it from. Threats can change depending on where you’re located, what you’re doing, and whom you’re working with. Therefore, in order to determine what solutions will be best for you, you should conduct a threat modeling assessment.

The five questions in the assessment:

  1. What do you want to protect?
  2. Who do you want to protect it from?
  3. How likely is it that you will need to protect it?
  4. How bad are the consequences if you fail?
  5. How much trouble are you willing to go through in order to try to prevent those?

are useful whether you are discussing cyber, physical or national security.

Assuming you accept the proposition that a “…no sparrow shall fall…” system is literally impossible.

In the light of terrorist attacks, talking heads call for this to “…never happen again….” Nonsense. Of course terror attacks will happen again. No matter what counter-measures are taken.

Consider bank robberies, for instance. We know where all the banks are located, so there is never a question of where bank robberies will take place. But, given other values, such as customer convenience, it isn't possible to prevent all bank robberies.

There is an acceptable rate of bank robbery and security measures keep it roughly at that rate.

The same is true for cyber, physical or national security.

This threat assessment exercise will help you create a fact-based assessment of your risk and the steps you take to counter it.

Better a fact-based assessment than the talking head variety.

I first saw this in a tweet by the EFF.

California Surveillance Sweep – Official News

Filed under: Government,Privacy — Patrick Durusau @ 1:31 pm

As I predicted in California Surveillance Sweep – Success!, a preliminary report by Dave Maass, Here are 79 California Surveillance Tech Policies. But Where Are the Other 90?, outlines the success:

Laws are only as strong as their enforcement.

That’s why last weekend more than 30 citizen watchdogs joined EFF’s team to hold California law enforcement and public safety agencies accountable. Together, we combed through nearly 170 California government websites to identify privacy and usage policies for surveillance technology that must now be posted online under state law.

You can tell from the headline that some 90 websites are missing surveillance policies required by law.

See Dave’s post for early analysis of the results, more posts to follow on the details.

This crowd-sourcing was an experiment for the EFF and I am hopeful they will provide similar opportunities to participate in the future.

Age has made me less useful at the barricades but I can still wield a keyboard. It was a real joy to contribute to such a cause.

Along those lines, consider joining the Electronic Frontier Alliance:

a new network we’ve [EFF] launched to increase grassroots activism on digital civil liberties issues around the country

Most of my readers have digital skills oppressors only dream about.

It’s up to you where you put them to work.

April 11, 2016

Knights of Ignorance (Burr and Feinstein) Hold Tourney With No Opponents

Filed under: Cryptography,Government,Journalism,News,Privacy,Reporting — Patrick Durusau @ 8:27 pm

Burr And Feinstein Plan One Sided Briefing For Law Enforcement To Bitch About ‘Going Dark’ by Mike Masnick.

From the post:

With the world mocking the sheer ignorance of their anti-encryption bill, Senators Richard Burr and Dianne Feinstein are doubling down by planning a staff “briefing” on the issue of “going dark” with a panel that is made up entirely of law enforcement folks. As far as we can tell, it hasn’t been announced publicly, but an emailed announcement was forwarded to us, in which they announce the “briefing” (notably not a “hearing“) on “barriers to law enforcement’s ability to lawfully access the electronic evidence they need to identify suspects, solve crimes, exonerate the innocent and protect communities from further crime.” The idea here is to convince others in Congress to support their ridiculous bill by gathering a bunch of staffers and scaring them with bogeyman stories of “encryption caused a crime wave!” As such, it’s no surprise that the panelists aren’t just weighted heavily in one direction, they’re practically flipping the boat. Everyone on the panel comes from the same perspective, and will lay out of the argument for “encryption bad!”

An upside to the approaching farce is that it identifies people who possess “facts” to support the “encryption bad” position.

Given fair warning of their identities, what can you say about these “witnesses?”

Do you think some enterprising reporter will press them for detailed facts and not illusory hand waving? (I realize Senators are never pressed, not really, for answers. Reporters want the next interview. But these witnesses aren’t Senators.)

For example, Hillar C. Moore, III, has campaigned for a misdemeanor jail to incarcerate traffic offenders in order to lower violent crime.

“He said Wednesday that he believes the jail is an urgent public safety tool that could lower violent crime in the city. “This summer, we didn’t have the misdemeanor jail, and while it’s not responsible for every murder, this is responsible for the crime rate being slightly higher,” Moore said. “Baton Rouge could have done better than other cities, but we missed out on that. It’s time for everyone to get on board and stop looking the other way.”

Moore’s office asked the East Baton Rouge Parish Metro Council in recent weeks for authorization to use dedicated money to open a misdemeanor jail on a temporary basis, two weeks at a time for the next several months, to crack down on repeat offenders who refuse to show up in court.

The request was rejected by the council, after opponents accused law enforcement officials of using the jail to target nonviolent, low-income misdemeanor offenders as a way to shake them down for money for the courts. More than 60 percent of misdemeanor warrants are traffic-related offenses, and critics angrily took issue with a proposal that potentially could result in jailing traffic violators.”

Evidence and logic aren’t Hillar’s strong points.

That’s one fact about one of the prospective nut-job witnesses.

What’s your contribution to discrediting this circus of fools?

4330 Data Scientists and No Data Science Renee

Filed under: Data Science,Web Scrapers — Patrick Durusau @ 4:22 pm

After I posted 1880 Big Data Influencers in CSV File, I got a tweet from Data Science Renee pointing out that her name wasn’t in the list.

Renee does a lot more on “data science” and not so much on “big data,” which sounded like a plausible explanation.

Even if “plausible,” I wanted to know if there was some issue with my scraping of Right Relevance.

Knowing that Renee’s influence score for “data science” is 81, I set the query to scrape the list between 65 and 98, just to account for any oddities in being listed.

The search returned 1832 entries. Search for Renee, nada, no got. Here’s the 1832-data-science-list.

In an effort to scrape all the listings, which should be 10,375 influencers, I set the page delay up to Ted Cruz reading speed. Ten entries every 72,000 milliseconds. 😉

That resulted in 4330-data-science-list.

No joy, no Renee!

It isn’t clear to me why my scraping fails before recovering the entire data set, but in any reasonable sort order a listing of roughly 10K data scientists should have Renee in the first 100 entries, much less the first 1,000 or even the first 4K.

Something is clearly amiss with the data but what?

Check me on the first ten entries with “data science” as the search term, but here is what I find:

  • Hilary Mason
  • Kirk Borne – no data science
  • Nathan Yau
  • Gregory Piatetsky – no data science
  • Randy Olson
  • Jeff Hammerbacher – no data science
  • Chris Dixon @cdixon – no data science
  • dj patil @dpatil
  • Doug Laney – no data science
  • Big Data Science – no data science

The notation, “no data science,” means that entry does not have a label for data science. Odd considering that my search was specifically for influencers in “data science.” The same result obtains if you choose one of the labels instead of searching. (I tried.)

Clearly all of these people could be listed for “data science,” but if I am searching for that specific category, why is that missing from six of the first ten “hits?”
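If you want to check my results, the two questions here (is a given name in the scraped list, and which entries lack the search label?) reduce to a few lines of Python. This is a sketch against an invented CSV; the column names are my assumptions, not Right Relevance’s actual export format:

```python
import csv
import io

# Hypothetical CSV shaped like a Right Relevance scrape.
# Column names ("name", "score", "labels") are invented for illustration.
SAMPLE = """name,score,labels
Hilary Mason,93,data science
Kirk Borne,92,big data
Data Science Renee,81,data science
"""

def audit(csv_text, search_label, wanted_name):
    """Return entries missing the search label, and whether a name appears."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    missing_label = [r["name"] for r in rows
                     if search_label not in r["labels"]]
    found = any(r["name"] == wanted_name for r in rows)
    return missing_label, found

missing, found = audit(SAMPLE, "data science", "Data Science Renee")
# missing -> ['Kirk Borne']; found -> True
```

Run against the real exports (1832-data-science-list, 4330-data-science-list) you would only need to swap the sample text for the downloaded file.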

As for Data Science Renee, I can help you with that to a degree. Follow @BecomingDataSci, @DataSciGuide, @DataSciLearning and @NewDataSciJobs. Visit her website: http://t.co/zv9NrlxdHO. Podcasts, interviews, posts: just a hive of activity.

On the mysteries of Right Relevance and its data I’m not sure what to say. I posted feedback a week ago mentioning the issue with scraping and ordering, but haven’t heard back.

The site has a very clever idea but looking in from the outside with a sample size of 1, I’m not impressed with its delivery on that idea.

Issues I don’t know about with Web Scraper?

If you have contacts with Right Relevance could you gently ping them for me? Thanks!

SEMS 2016 (Auditable Spreadsheets – Quick Grab Your Heart Pills)

Filed under: Programming,Spreadsheets,Transparency — Patrick Durusau @ 3:12 pm

3rd International Workshop on Software Engineering Methods in Spreadsheets

July 4, 2016 Vienna, Austria

Abstracts due: April 11th (that’s today!)

Papers due: April 22nd

From the webpage:

SEMS is the #1 venue for academic spreadsheet research since 2014 (SEMS’14, SEMS’15). This year, SEMS’16 is going to be co-located with STAF 2016 in Vienna.

Spreadsheets are heavily used in industry as they are easy to create and evolve through their intuitive visual interface. They are often initially developed as simple tools, but, over time, spreadsheets can become increasingly complex, up to the point they become too complicated to maintain. Indeed, in many ways, spreadsheets are similar to “professional” software: both concern the storage and manipulation of data, and the presentation of results to the user. But unlike in “professional” software, activities like design, implementation, and maintenance in spreadsheets have to be undertaken by end-users, not trained professionals. This makes applying methods and techniques from other software technologies a challenging task.

The role of SEMS is to explore the possibilities of adopting successful methods from other software contexts to spreadsheets. Some, like testing and modeling, have been tried before and can be built upon. For methods that have not yet been tried on spreadsheets, SEMS will serve as a platform for early feedback.

The SEMS program will include an industrial keynote, followed by a brainstorming session about the topic, a discussion panel of industrial spreadsheet usage, presentation of short and long research papers and plenty of lively discussions. The intended audience is a mixture of spreadsheet researchers and professionals.

Felienne Hermans pioneered viewing spreadsheets as programming artifacts, a view that can result in easier maintenance and even, gasp, auditing of spreadsheets.

Inspectors General, GAO and other birds of that feather should sign up for this conference.

Remember topic maps for cumulative and customized auditing data. For example, who, by name, was explaining entries that several years later appear questionable? Topic maps can capture as much or as little data as you require.

Attend, submit an abstract today and a paper in two weeks!

Targeting Nuclear Weapons/Facilities

Filed under: Government,Politics — Patrick Durusau @ 10:45 am

Nuclear Facilities Attack Database (NuFAD)

From the about page:

The Nuclear Facilities Attack Database (NuFAD) is a global database recording assaults, sabotages and unarmed breaches of nuclear facilities. The database emerged when several START researchers sought to explore the potential terrorist threat to nuclear facilities and discovered that there was a general lack of systematic open source data on the topic. What followed was a comprehensive attempt to identify the most relevant data from among the numerous historical anecdotes, unsubstantiated reports and vague references to attacks.

The resulting Nuclear Facility Attack Database (NuFAD) contains 80 cases identified from open sources. A full source listing can be found here.

Access the database here:

The NuFAD is described and analyzed extensively in an upcoming publication:

Gary A. Ackerman and James Halverson, “Attacking Nuclear Facilities: Hype or Genuine Threat?” in Brecht Volders, Tom Sauer (eds.) Nuclear Terrorism: Countering the Threat, New York, Routledge (2016)

This initial beta version of the database provides brief summaries of each incident, an interactive map and timeline, and the ability to filter out cases that do not meet the criteria desired by the user. Future versions will include greater detail on each incident and other features as described in greater depth in the FAQ section. Please direct feedback concerning the database to James Halverson at jhalvers@umd.edu.

The “threat” of nuclear terrorism primes funding pump in ways that few other threats can manage.

Mostly because a nuclear weapon negates the physical separation security of elites and world leaders.

Or put differently, you don’t have to be a very good shot with a tactical nuke. (Cf. John Hinckley Jr.)

For the general population, gunshot, car bomb or tactical nuke, the distinction is meaningless. Dead is dead.

How indifferent are the elites to deaths in the general population?

Millions have died since the turn of the century in various wars and attacks.

Can you name one member of the elites, any country, who has died in those wars/attacks?

Neither can I.

April 10, 2016

NSA Grade – Network Visualization with Gephi

Filed under: Gephi,Networks,R,Visualization — Patrick Durusau @ 5:07 pm

Network Visualization with Gephi by Katya Ognyanova.

It’s not possible to cover Gephi in sixteen (16) pages but you will wear out more than one printed copy of these sixteen (16) pages as you become experienced with Gephi.

This version is from a Gephi workshop at Sunbelt 2016.

Katya‘s homepage offers a wealth of network visualization posts and extensive use of R.

Follow her at @Ognyanova.

PS: Gephi equals or exceeds visualization capabilities in use by the NSA, depending upon your skill as an analyst and the quality of the available data.
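If your graph data starts life in code, one low-friction route into Gephi is GEXF, the XML format Gephi opens natively. A minimal standard-library sketch (the toy graph and file name are invented for illustration):

```python
import xml.etree.ElementTree as ET

def to_gexf(nodes, edges, path):
    """Write a simple undirected graph as GEXF 1.2, which Gephi opens directly."""
    gexf = ET.Element("gexf", xmlns="http://www.gexf.net/1.2draft", version="1.2")
    graph = ET.SubElement(gexf, "graph", defaultedgetype="undirected")
    nodes_el = ET.SubElement(graph, "nodes")
    for nid, label in nodes:
        ET.SubElement(nodes_el, "node", id=str(nid), label=label)
    edges_el = ET.SubElement(graph, "edges")
    for eid, (src, dst) in enumerate(edges):
        ET.SubElement(edges_el, "edge", id=str(eid),
                      source=str(src), target=str(dst))
    ET.ElementTree(gexf).write(path, encoding="utf-8", xml_declaration=True)

# Toy two-node graph; open the result with File > Open in Gephi.
to_gexf([(0, "Alice"), (1, "Bob")], [(0, 1)], "toy.gexf")
```

For anything beyond toy sizes you would generate the node and edge lists from your own data source rather than literals.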

Climate Change: Earth Surface Temperature Data

Filed under: Climate Data,Modeling — Patrick Durusau @ 4:17 pm

Climate Change: Earth Surface Temperature Data by Berkeley Earth.

From the webpage:

Some say climate change is the biggest threat of our age while others say it’s a myth based on dodgy science. We are turning some of the data over to you so you can form your own view.

Even more than with other data sets that Kaggle has featured, there’s a huge amount of data cleaning and preparation that goes into putting together a long-time study of climate trends. Early data was collected by technicians using mercury thermometers, where any variation in the visit time impacted measurements. In the 1940s, the construction of airports caused many weather stations to be moved. In the 1980s, there was a move to electronic thermometers that are said to have a cooling bias.

Given this complexity, there are a range of organizations that collate climate trends data. The three most cited land and ocean temperature data sets are NOAA’s MLOST, NASA’s GISTEMP and the UK’s HadCrut.

We have repackaged the data from a newer compilation put together by the Berkeley Earth, which is affiliated with Lawrence Berkeley National Laboratory. The Berkeley Earth Surface Temperature Study combines 1.6 billion temperature reports from 16 pre-existing archives. It is nicely packaged and allows for slicing into interesting subsets (for example by country). They publish the source data and the code for the transformations they applied. They also use methods that allow weather observations from shorter time series to be included, meaning fewer observations need to be thrown away.

All the computation on climate change is ironic in the face of meteorologist Edward R. Lorenz publishing Deterministic Nonperiodic Flow in 1963.

You may know that better as the “butterfly effect”: very small changes in starting conditions can result in very large final states, which are not subject to prediction.
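Lorenz’s point is easy to reproduce: integrate his 1963 system twice with initial conditions differing by one part in a billion and the trajectories part company. A rough Euler-integration sketch, using the conventional parameter values (sigma=10, rho=28, beta=8/3); step size and step count are my choices, not tuned values:

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz (1963) system."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, steps=10_000):
    # Integrate to t = steps * dt and return the final state.
    for _ in range(steps):
        state = lorenz_step(*state)
    return state

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0 + 1e-9, 1.0, 1.0))  # one part in a billion different
gap = max(abs(p - q) for p, q in zip(a, b))
# The 1e-9 difference grows by many orders of magnitude over the run.
```

Shrink the perturbation as far as floating point allows; the trajectories still separate, which is exactly why long-range point prediction fails.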

If you find Lorenz’s original paper tough sledding, you may enjoy When the Butterfly Effect Took Flight by Peter Dizikes. (Be aware the links to Lorenz papers in that post are broken, or at least appear to be today.)

In debates about limiting the increase in global temperature, recall that no one knows where any “tipping points” may lie along the way. That is, the recognition of “tipping points” always comes post-tipping.

Given the multitude of uncertainties in modeling climate and the money to be made by solutions chosen or avoided, what do you think will be driving climate research? National interests and priorities or some other criteria?

PS: Full disclosure. Humanity has had, and is having, an impact on the climate, and not for the better, at least in terms of human survival. Whether we are capable of changing human behavior enough to alter results that won’t be seen for fifty or more years remains to be seen.

When Mapping Fails – Big Time

Filed under: Mapping,Maps — Patrick Durusau @ 11:15 am

How an internet mapping glitch turned a random Kansas farm into a digital hell by Kashmir Hill.

From the post:

An hour’s drive from Wichita, Kansas, in a little town called Potwin, there is a 360-acre piece of land with a very big problem.

The plot has been owned by the Vogelman family for more than a hundred years, though the current owner, Joyce Taylor née Vogelman, 82, now rents it out. The acreage is quiet and remote: a farm, a pasture, an old orchard, two barns, some hog shacks and a two-story house. It’s the kind of place you move to if you want to get away from it all. The nearest neighbor is a mile away, and the closest big town has just 13,000 people. It is real, rural America; in fact, it’s a two-hour drive from the exact geographical center of the United States.

But instead of being a place of respite, the people who live on Joyce Taylor’s land find themselves in a technological horror story.

For the last decade, Taylor and her renters have been visited by all kinds of mysterious trouble. They’ve been accused of being identity thieves, spammers, scammers and fraudsters. They’ve gotten visited by FBI agents, federal marshals, IRS collectors, ambulances searching for suicidal veterans, and police officers searching for runaway children. They’ve found people scrounging around in their barn. The renters have been doxxed, their names and addresses posted on the internet by vigilantes. Once, someone left a broken toilet in the driveway as a strange, indefinite threat.

All in all, the residents of the Taylor property have been treated like criminals for a decade. And until I called them this week, they had no idea why.

If you use “IP mapping,” you owe it to yourself and your customers to give Kashmir’s story a close and careful read.

Nope, no spoilers, you have to read the story to appreciate how reasonable and good faith decisions can over time result in very bad unintended consequences.

Enjoy!

PS: What sort of implied threat is a broken toilet? Just curious, thought you might know.

A Challenge for Wannabe LambdaConf 2016 Censors

Filed under: Censorship,Free Speech — Patrick Durusau @ 10:46 am

LambdaConf 2016 has become the target of a long list of self-confessed censors, who have taken it upon themselves to object to the selection of Curtis Yarvin as a speaker at that conference.

Authors aren’t identified in the program listing.

Here’s the challenge to wannabe LambdaConf 2016 censors: Which of these talks promote racism, etc.? (You can see the full descriptions here. I omitted the prose in the interest of space.)

  • 4 Weird Tricks to Become a Better Functional Programmer
  • A Board Game Night with Geeks
  • All About a Fold
  • An Immutable State Machine
  • Coding Under Uncertainty
  • Dialyzer: Optimistic Type Checking for Erlang and Elixir
  • Discrete Time and Race Conditions
  • Exotic Functional Data Structures: Off-heap Functionally Persistent Fractal Trees
  • Extracting Useful Information from your Code Repository using F#
  • Functional Algebra for Middle School Students
  • Functional Programming is Overrated
  • Functional Programming: Destination or Origin?
  • Functional Reactive Programming for Natural User Interfaces
  • Functional Refactoring
  • Functional Relational Programming In UI Programming
  • Functional Web Programming: An Empirical Overview
  • How Environment and Experience Shape the Brain
  • How to Get Started with Functional Programming
  • How to Use Covariance and Contravariance to Build Flexible and Robust Programs
  • Interactive Tests and Documentation via QuickCheck-style Declarations
  • Make Your Own Lisp Interpreter in 10 Incremental Steps
  • Mastering Apache Spark
  • MTL Versus Free: Deathmatch!
  • Named and Typed Homoiconicity
  • No If’s, Cond’s, or Bool’s About It!
  • Panel: The Functional Front-End
  • Program Derivation for Functional Languages
  • Purely Functional Semantic and Syntax Expression Composition
  • Queries Inside Out: The Algorithms of your Relational Database in Clojure
  • RankNTypes Ain’t Rank at All
  • Real-World Gobbledygook
  • Servant – How to Create a Clean Web API
  • The Easy-Peasy-Lemon-Squeezy, Statically-Typed, Purely Functional Programming Workshop for All!
  • The Keys to Collaboration
  • The Missing Diamond of Scala Variance
  • The Next Great Functional Programming Language: Year 2
  • Type Kwon Do
  • Type Systems for Alchemy
  • Type-Level Hold’em: Encoding the Rules of Poker with Shapeless
  • Types for Ancient Greek
  • Typesafe Data Frames with Shapeless
  • Urbit: A Clean-Slate Functional Operating Stack
  • What Would Happen if REST Were Immutable?
  • Who Let Algebra Get Funky with my Data Types?
  • Witchcraft: Experiments Getting Higher-Order Abstractions into Elixir
  • Your Esoteric Benefactor: The Simple Richness of Lambda Calculus

Identify the objectionable talk(s) in your comments below.

As for Curtis Yarvin, reflect on how your attempts at censorship have given a broader stage to his non-programming ideas. That’s all on you, not Yarvin.

Yet another illustration of why censorship is such a very bad idea. Always.

PS: As for diversity, practicing it yourself is far more effective than self-righteously denouncing others for failing to practice it. Self-practice of diversity requires day-to-day effort.

California Surveillance Sweep – Success!

Filed under: Government,Privacy — Patrick Durusau @ 8:57 am

I mentioned the California Surveillance Sweep effort in Walking the Walk on Privacy.

Just a reminder:

Join EFF on Saturday, April 9 for a first-of-its-kind crowdsourcing campaign to hold California law enforcement agencies accountable for their use of surveillance technologies.

Volunteers like you will help us track down the privacy and usage policies of law enforcement agencies across California and add them to our database. We’ll show you how to do it, and you can be anywhere with an Internet connection to participate.

That was yesterday and I got word this morning that the effort was a complete success!

The EFF will be announcing more details but I wanted to give a quick shout out to everyone who participated in this effort!

It isn’t a hammer strike against the forces of darkness but then successful resistance rarely has that luxury.


The project design made participation easy and some elements that should be repeated in the future are:

  • Each volunteer was sent a set of small tasks which could be completed in a short period of time. Data entry immediately followed each task, generating a sense of accomplishment.
  • The data entry form was short, well labeled and well designed.
  • Multiple volunteers got the same tasks to enable cross-checking.
  • Users chose one-time handles for this project only, encouraging a sense of being in the “resistance.” We’re all human and grouping is something that comes naturally to us. Even with anonymous others in a common cause. Encourage that feeling.

Every project will be different but those principles are the ones I observed in operation in the California Surveillance Sweep.
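The cross-checking principle deserves a sketch: hand each task to k distinct volunteers, then flag any task whose entries disagree for human review. A toy pass, with invented task and volunteer names:

```python
from collections import Counter
from itertools import cycle

def assign(tasks, volunteers, k=2):
    """Give every task to k distinct volunteers for cross-checking."""
    assert k <= len(set(volunteers)), "need at least k distinct volunteers"
    pool = cycle(volunteers)
    plan = {}
    for task in tasks:
        chosen = []
        while len(chosen) < k:
            v = next(pool)
            if v not in chosen:
                chosen.append(v)
        plan[task] = chosen
    return plan

def reconcile(entries):
    """entries: {task: [answer, ...]} -> tasks whose answers disagree."""
    return [t for t, answers in entries.items()
            if len(Counter(answers)) > 1]

plan = assign(["agency-1", "agency-2"], ["ana", "bo", "cy"], k=2)
flagged = reconcile({"agency-1": ["policy A", "policy A"],
                     "agency-2": ["policy B", "none found"]})
# flagged -> ['agency-2']
```

Agreement between independent volunteers lets organizers trust most entries without checking them; only the disagreements need a second look.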

Others?

April 9, 2016

No Perception Without Cartography [Failure To Communicate As Cartographic Failure]

Filed under: Perception,Subject Identity,Topic Maps — Patrick Durusau @ 8:12 pm

Dan Klyn tweeted:

No perception without cartography

with an image of this text (from Self comes to mind: constructing the conscious mind by Antonio R Damasio):


The nonverbal kinds of images are those that help you display mentally the concepts that correspond to words. The feelings that make up the background of each mental instant and that largely signify aspects of the body state are images as well. Perception, in whatever sensory modality, is the result of the brain’s cartographic skill.

Images represent physical properties of entities and their spatial and temporal relationships, as well as their actions. Some images, which probably result from the brain’s making maps of itself making maps, are actually quite abstract. They describe patterns of occurrence of objects in time and space, the spatial relationships and movement of objects in terms of velocity and trajectory, and so forth.

Dan’s tweet spurred me to think that our failures to communicate to others could be described as cartographic failures.

If we use a term that is unknown to the average reader, say “daat,” the reader lacks a mental mapping that enables interpretation of that term.

Even if you know the term, it doesn’t stand in isolation in your mind. It fits into a number of maps, some of which you may be able to articulate and very possibly into other maps, which remain beyond your (and our) ken.

Not that this is a light-going-off experience for you or me, but perhaps the cartographic imagery may be helpful in illustrating both the value and the risks of topic maps.

The value of topic maps is spoken of often but the risks of topic maps rarely get equal press.

How would topic maps be risky?

Well, consider the average spreadsheet used in a business setting.

Felienne Hermans in Spreadsheets: The Ununderstood Dark Matter of IT makes a persuasive case that spreadsheets are on average five years old with little or no documentation.

If those spreadsheets remain undocumented, both users and auditors are equally stymied by their ignorance, a cartographic failure that leaves both wondering what must have been meant by columns and operations in the spreadsheet.

To the extent that a topic map or other disclosure mechanism preserves and/or restores the cartography that enables interpretation of the spreadsheet, suddenly staff are no longer plausibly ignorant of the purpose or consequences of using the spreadsheet.
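A toy illustration of what such a disclosure mechanism buys you: keep a machine-checkable map from spreadsheet columns to their explanations and flag any column nobody has explained. Column names and documentation here are invented:

```python
# Invented documentation map: column name -> explanation of record.
DOCS = {
    "q1_revenue": "Gross Q1 revenue, USD, pulled from the invoicing system",
    "adjustment": "Manual correction entered by accounting staff",
}

def undocumented_columns(header_row, docs):
    """Return spreadsheet columns with no recorded explanation."""
    return [col for col in header_row if col not in docs]

header = ["q1_revenue", "adjustment", "misc_fudge"]
print(undocumented_columns(header, DOCS))
# -> ['misc_fudge']
```

Once a column like the hypothetical “misc_fudge” is flagged, someone has to put their name to an explanation, and that record persists from audit to audit.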

Facile explanations that change from audit to audit are no longer possible. Auditors are chargeable with consistent auditing from one audit to another.

Does it sound like there is going to be a rush to use topic maps or other mechanisms to make spreadsheets transparent?

Still, transparency that befalls one could well benefit another.

Or to paraphrase King David (2 Samuel 11:25):

Don’t concern yourself about this. In business, transparency falls on first one and then another.

Ready to inflict transparency on others?

Visual Guide to Senate Crypto Bill [First Draft In Crayon?]

Filed under: Cryptography,Cybersecurity,Government — Patrick Durusau @ 7:13 pm

The Senate crypto bill is comically bad: A visual guide by Isaac Potoczny-Jones.

From the post:

If you’re curious about the draft text for the senate crypto bill please, read the text for yourself or a summary on Wired. If you have ever used a security product, you’ll probably quickly realize that it would make most (if not all) encryption illegal.

For example, a product like an encrypted hard drive is covered since seagate provides a process for storing data. Upon a court order, seagate must provide the data on that drive by making it intelligible, either by never encrypting it or if it is encrypted, they must decrypt it.

The following graphic is provided to illustrate the paths by which nearly all secure storage or communication needs to have a back door.

You have to see the graphic to truly appreciate how lame the draft Senate crypto bill is in fact.

These are the same people who are responsible for the “riddle, wrapped in a mystery, inside an enigma” that is the Internal Revenue Code (IRC).

I can’t say it is a fact, but I suspect the first draft of the Senate crypto bill was in crayon.

What do you think?

20 Slack Apps You’ll Love

Filed under: Information Overload,Software — Patrick Durusau @ 6:58 pm

20 Slack Apps You’ll Love

From the post:

Slack is taking the business world by storm. More and more companies are using this communication tool—and it’s becoming an increasingly robust platform due to all of the integrations being built on top of it. Now, you can do pretty much everything in Slack—from tracking how your customers use your app, to keeping tabs on company finances at a glance, to getting a daily digest of top news from around the web.

Here are 20 of the Product Hunt community’s most-loved Slack integrations. Trust us—once you give some of these a try, you’ll wonder how you ever made it through the day without them.

I tagged this under “information overload” because, even though I use and enjoy Slack, the last thing I need is another app to “manage” information flow.

What I desperately need is a mechanism that filters (no cat pics), promotes important content (not necessarily the most tweeted/reposted), integrates content from multiple sources/feeds (think no repetition: how many times need I see “bombing in Brussels”? I get it), and provides one-touch access to a history governed by the same rules.

Although I gather and process large collections of information, I can only work closely with 10 to 20 results at any given point.

More than 20 “hits” is just advertising for the “depth/breadth” of your search mechanism. Or perhaps my lack of skill with your search mechanism.
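A first cut at the mechanism I’m describing is surprisingly small: normalize headlines, drop blocked topics and repeats, cap the result at a workable number. A sketch (the sample feed is invented):

```python
import re

def normalize(title):
    """Collapse near-duplicate headlines to a comparable key."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def filter_feed(items, blocked_words, limit=20):
    """Drop blocked topics, deduplicate repeats, cap at a workable number."""
    seen, kept = set(), []
    for title in items:
        key = normalize(title)
        if any(w in key for w in blocked_words) or key in seen:
            continue
        seen.add(key)
        kept.append(title)
    return kept[:limit]

feed = ["Bombing in Brussels", "BOMBING in Brussels!",
        "New Solr 6.0 released", "Cat pics of the day"]
print(filter_feed(feed, blocked_words=["cat pics"]))
# -> ['Bombing in Brussels', 'New Solr 6.0 released']
```

Promoting genuinely important content over the merely most-retweeted is the hard, unsolved part; this only handles the filtering and repetition.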

You are very likely to find a useful app in this collection but if not, don’t despair! The post concludes with a link to a list of over 350 more Slack apps!

Enjoy!

Tracking Down Superdelegates – Data In Action

Filed under: Crowd Sourcing,Government,Politics — Patrick Durusau @ 9:18 am

The Republican party has long been known for its efforts to exclude voters from the voting process altogether. Voter ID laws, purging voter rolls, literacy tests, etc.

The Democratic party skips the difficulty of dealing with voters because votes are irrelevant in the selection of superdelegates for presidential nominations. That gives the appearance of greater democracy while being on a par with the Republicans.

To counter the Democratic Party’s lack of democracy, Spencer Thayer created the Superdelegate Hit List.

There was a predictable reaction of the overly sensitive media types and the privileged (by definition, superdelegates number among the privileged) to the use of “hit list” in the name.

Avoidance of personal accountability is a characteristic of the privileged.

The goal being to persuade the privileged, not to ride them down (the more appropriate response to privilege), the site name was changed to “Superdelegate List,” dropping the “hit.”

Here is the current list of superdelegates.

As you can see, the list is “sparse” on contact information.

You can help the Democratic Party be democratic by contributing data at http://superdelegatelist.com/form.

Weekend Hacking Target: Modems

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:26 am

If you’re not buried in the Lucene/Solr 6.0 release, you may be interested in weekend hacking practice.

Swati Khandelwal reports on an easy hack of cable modems in No Password Required! 135 Million Modems Open to Remote Factory Reset.

From the post:

More than 135 Million modems around the world are vulnerable to a flaw that can be exploited remotely to knock them offline by cutting off the Internet access.

The simple and easily exploitable vulnerability has been uncovered in one of the most popular and widely-used cable modem, the Arris SURFboard SB6141, used in Millions of US households.

Security researcher David Longenecker discovered a loophole that made these modems vulnerable to unauthenticated reboot attacks. He also released his “exploit” after Arris (formerly Motorola) stopped responding to him despite a responsible disclosure.

The Bug is quite silly: No Username and Password Protection.

See Swati’s post for the details on the hack.

Before you go looking for wifi hotspots and vulnerable modems, remember that hacking law enforcement, other government agencies, or indeed any modem/network may be criminal activity.

As I have pointed out before, legal liability for vendors is the answer to this type of defect. It has worked in other areas of products liability and there is no reason why it could not work for computer software/hardware.

Good hunting!
