Archive for the ‘Programming’ Category

Numba Versus C++ – On Wolfram CAs

Tuesday, March 6th, 2018

Numba Versus C++ by David Butts, Gautham Dharuman, Bill Punch and Michael S. Murillo.

Python is a programming language that first appeared in 1991; soon, it will have its 27th birthday. Python was created not as a fast scientific language, but rather as a general-purpose language. You can use Python as a simple scripting language or as an object-oriented language or as a functional language…and beyond; it is very flexible. Today, it is used across an extremely wide range of disciplines and is used by many companies. As such, it has an enormous number of libraries and conferences that attract thousands of people every year.

But, Python is an interpreted language, so it is very slow. Just how slow? It depends, but you can count on about 10-100 times as slow as, say, C/C++. If you want fast code, the general rule is: don’t use Python. However, a few more moments of thought lead to a more nuanced perspective. What if you spend most of the time coding, and little time actually running the code? Perhaps your familiarity with the (slow) language, or its vast set of libraries, actually saves you time overall? And, what if you learned a few tricks that made your Python code itself a bit faster? Maybe that is enough for your needs? In the end, for true high performance computing applications, you will want to explore fast languages like C++; but, not all of our needs fall into that category.

As another example, consider the fact that many applications use two languages, one for the core code and one for the wrapper code; this allows for a smoother interface between the user and the core code. A common use case is C or C++ wrapped by, of course, Python. As a user, you may not even know that the code you are using is in another language! Such a situation is referred to as the “two-language problem”. This situation is great provided you don’t need to work in the core code, or you don’t mind working in two languages – some people don’t mind, but some do. The question then arises: if you are one of those people who would like to work only in the wrapper language, because it was chosen for its user friendliness, what options are available to make that language (Python in this example) fast enough that it can also be used for the core code?

We wanted to explore these ideas a bit further by writing a code in both Python and C++. Our past experience suggested that while Python is very slow, it could be made about as fast as C using the crazily-simple-to-use library Numba. Our basic comparisons here are: basic Python, Numba and C++. Because we are not religious about Python, and you shouldn’t be either, we invited expert C++ programmers to have the chance to speed up the C++ as much as they could (and, boy could they!).

This webpage is highly annoying, in both Mozilla and Chrome. You’ll have to visit to get the full impact.

It is, however, also a great post on using Numba to obtain much faster results while still using Python. The use of Wolfram CAs (cellular automata) as examples is an added bonus.
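For a feel of what the comparison involves, here is a minimal sketch (my own, not the authors' code) of one Rule 30 cellular automaton step, first in plain Python/NumPy and then compiled with Numba's @njit; the rule number, grid size and step count are arbitrary choices for illustration:

import time
import numpy as np
from numba import njit

def step_python(cells, rule):
    # One update of an elementary (Wolfram) cellular automaton with wraparound.
    n = cells.shape[0]
    out = np.zeros_like(cells)
    for i in range(n):
        left = cells[(i - 1) % n]
        center = cells[i]
        right = cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        out[i] = (rule >> pattern) & 1
    return out

# The same function, JIT-compiled by Numba on first call.
step_numba = njit(step_python)

cells = np.zeros(10_000, dtype=np.int64)
cells[cells.shape[0] // 2] = 1
step_numba(cells, 30)  # warm-up call triggers compilation

for label, fn in [("plain python", step_python), ("numba", step_numba)]:
    state = cells.copy()
    start = time.perf_counter()
    for _ in range(200):
        state = fn(state, 30)
    print(f"{label}: {time.perf_counter() - start:.3f} s")

The single njit call is the entire Numba investment here, which is the "crazily-simple-to-use" point above.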


Evolving a Decompiler

Wednesday, February 14th, 2018

Evolving a Decompiler by Matt Noonan.

From the post:

Back in 2016, Eric Schulte, Jason Ruchti, myself, Alexey Loginov, and David Ciarletta (all of the research arm of GrammaTech) spent some time diving into a new approach to decompilation. We made some progress but were eventually all pulled away to other projects, leaving a very interesting work-in-progress prototype behind.

Being a promising but incomplete research prototype, it was quite difficult to find a venue to publish our research. But I am very excited to announce that I will be presenting this work at the NDSS binary analysis research (BAR) workshop next week in San Diego, CA! BAR is a workshop on the state-of-the-art in binary analysis research, including talks about working systems as well as novel prototypes and works-in-progress; I’m really happy that the program committee decided to include discussion of these prototypes, because there are a lot of cool ideas out there that aren’t production-ready, but may flourish once the community gets a chance to start tinkering with them.

How wickedly cool!

Did I mention all the major components are open-source?

GrammaTech recently open-sourced all of the major components of BED, including:

  • SEL, the Software Evolution Library. This is a Common Lisp library for program synthesis and repair, and is quite nice to work with interactively. All of the C-specific mutations used in BED are available as part of SEL; the only missing component is the big code database; just bring your own!
  • clang-mutate, a command-line tool for performing low-level mutations on C and C++ code. All of the actual edits are performed using clang-mutate; it also includes a REPL-like interface for interactively manipulating C and C++ code to quickly produce variants.

The building of the “big code database” sounds like an exercise in subject identity, doesn’t it?

Topic maps anyone?

What the f*ck Python! 🐍

Tuesday, February 6th, 2018

What the f*ck Python! 🐍

From the post:

Python, being a beautifully designed high-level and interpreter-based programming language, provides us with many features for the programmer’s comfort. But sometimes, the outcomes of a Python snippet may not seem obvious to a regular user at first sight.

Here is a fun project to collect such tricky & counter-intuitive examples and lesser-known features in Python, attempting to discuss what exactly is happening under the hood!

While some of the examples you see below may not be WTFs in the truest sense, they’ll reveal some of the interesting parts of Python that you might be unaware of. I find it a nice way to learn the internals of a programming language, and I think you’ll find them interesting as well!

If you’re an experienced Python programmer, you can take it as a challenge to get most of them right in the first attempt. You may be already familiar with some of these examples, and I might be able to revive sweet old memories of yours being bitten by these gotchas 😅

If you’re a returning reader, you can learn about the new modifications here.

So, here we go…

What better way to learn than being really pissed off that your code isn’t working? Or isn’t working as expected.
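For a flavor of the gotchas collected there, here is one classic (my example, not copied from the project): the mutable default argument.

# A mutable default argument is created once, when the function is
# defined, and then shared by every call that relies on the default.
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]  <- surprise: same list as the first call

# The usual fix: default to None and build a fresh list per call.
def append_item_fixed(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]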


This looks like a real hoot! Too late today to do much with it but I’ll be returning to it.


IDA v7.0 Released as Freeware – Comparison to The IDA Pro Book?

Saturday, February 3rd, 2018

IDA v7.0 Released as Freeware

From the download page:

The freeware version of IDA v7.0 has the following limitations:

  • no commercial use is allowed
  • lacks all features introduced in IDA > v7.0
  • lacks support for many processors, file formats, debugging etc…
  • comes without technical support

Copious amounts of documentation are online.

I haven’t seen The IDA Pro Book by Chris Eagle, but it was published in 2011. Do you know anyone who has compared The IDA Pro Book to version 7.0?

Two promising pages: IDA Support Overview and IDA Support: Links (external).

Python’s One Hundred and Thirty-Nine Week Lectionary Cycle

Wednesday, January 31st, 2018

Python 3 Module of the Week by Doug Hellmann

From the webpage:

PyMOTW-3 is a series of articles written by Doug Hellmann to demonstrate how to use the modules of the Python 3 standard library….

Hellmann documents one hundred and thirty-nine (139) modules in the Python standard library.

How many of them can you name?

To improve your score, use Hellmann’s list as a one hundred and thirty-nine (139) week lectionary cycle on Python.

Some modules may take less than a week, but others, like re — Regular Expressions, will take more than a week (a small taste follows below).

Even if you don’t finish a longer module, push on after two weeks so you can keep that feeling of progress and encountering new material.
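As a small taste of why re deserves its extra time, here is a sketch of a few features a week with the module might cover; the log line and pattern are made up for illustration:

import re

log_line = "2018-01-31 14:02:07 ERROR disk quota exceeded"

# Named groups, a verbose (commented) pattern, and precompilation.
pattern = re.compile(
    r"""
    (?P<date>\d{4}-\d{2}-\d{2})\s+   # ISO date
    (?P<time>\d{2}:\d{2}:\d{2})\s+   # time of day
    (?P<level>[A-Z]+)\s+             # log level
    (?P<message>.*)                  # everything that remains
    """,
    re.VERBOSE,
)

match = pattern.match(log_line)
if match:
    print(match.group("level"))   # ERROR
    print(match.groupdict())      # all named groups as a dict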

Don Knuth Needs Your Help

Monday, January 22nd, 2018

Donald Knuth Turns 80, Seeks Problem-Solvers For TAOCP

From the post:

An anonymous reader writes:

When 24-year-old Donald Knuth began writing The Art of Computer Programming, he had no idea that he’d still be working on it 56 years later. This month he also celebrated his 80th birthday in Sweden with the world premiere of Knuth’s Fantasia Apocalyptica, a multimedia work for pipe organ and video based on the bible’s Book of Revelation, which Knuth describes as “50 years in the making.”

But Knuth also points to the recent publication of “one of the most important sections of The Art of Computer Programming” in preliminary paperback form: Volume 4, Fascicle 6: Satisfiability. (“Given a Boolean function, can its variables be set to at least one pattern of 0s and 1s that will make the function true?”)

Here’s an excerpt from its back cover:

Revolutionary methods for solving such problems emerged at the beginning of the twenty-first century, and they’ve led to game-changing applications in industry. These so-called “SAT solvers” can now routinely find solutions to practical problems that involve millions of variables and were thought until very recently to be hopelessly difficult.

“in several noteworthy cases, nobody has yet pointed out any errors…” Knuth writes on his site, adding “I fear that the most probable hypothesis is that nobody has been sufficiently motivated to check these things out carefully as yet.” He’s uncomfortable printing a hardcover edition that hasn’t been fully vetted, and “I would like to enter here a plea for some readers to tell me explicitly, ‘Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,'” where N is one of the exercises listed on his web site.

Elsewhere he writes that two “pre-fascicles” — 5a and 5b — are also available for alpha-testing. “I’ve put them online primarily so that experts in the field can check the contents before I inflict them on a wider audience. But if you want to help debug them, please go right ahead.”
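If the back-cover description of SAT above feels abstract, a brute-force sketch shows what the question asks; real solvers handle millions of variables with far cleverer search than this:

from itertools import product

def formula(a, b, c):
    # (a OR b) AND (NOT a OR c) AND (NOT b OR NOT c)
    return (a or b) and ((not a) or c) and ((not b) or (not c))

def brute_force_sat(f, n_vars):
    # Try every pattern of 0s and 1s; return the first satisfying one.
    for assignment in product([False, True], repeat=n_vars):
        if f(*assignment):
            return assignment
    return None  # unsatisfiable

print(brute_force_sat(formula, 3))  # (False, True, False) satisfies it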

Do you have some other leisure project for 2018 that is more important?


Fun, Frustration, Curiosity, Murderous Rage – mimic

Monday, January 15th, 2018


From the webpage:

There are many more characters in the Unicode character set that look, to some extent or another, like others – homoglyphs. Mimic substitutes common ASCII characters for obscure homoglyphs.

Fun games to play with mimic:

  • Pipe some source code through and see if you can find all of the problems
  • Pipe someone else’s source code through without telling them
  • Be fired, and then killed

I can attest to the murderous rage from experience. There was a browser-based SGML parser that would barf on the presence of an extra whitespace (space I think) in the SGML declaration. One file worked, another with the “same” declaration did not.

Only by printing and comparing the files (this was on Windoze machines) was the errant space discovered.
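The underlying prank is easy to sketch; the Greek question mark standing in for a semicolon is the classic example, and mimic knows many more:

# Swap an ASCII character for a Unicode homoglyph: U+037E GREEK QUESTION MARK
# renders almost exactly like ';' but is a different code point, so the
# "identical-looking" source no longer means the same thing.
source = "x = 1; y = 2"
sabotaged = source.replace(";", "\u037e")

print(source == sabotaged)   # False, though they look the same on screen

# Finding the culprit means inspecting code points, not glyphs.
for position, ch in enumerate(sabotaged):
    if ord(ch) > 127:
        print(f"non-ASCII character at index {position}: U+{ord(ch):04X}")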


Learn to Write Command Line Utilities in R

Thursday, December 21st, 2017

Learn to Write Command Line Utilities in R by Mark Sellors.

From the post:

Do you know some R? Have you ever wanted to write your own command line utilities, but didn’t know where to start? Do you like Harry Potter?

If the answer to these questions is “Yes!”, then you’ve come to the right place. If the answer is “No”, but you have some free time, stick around anyway, it might be fun!

Sellors invokes the tradition of *nix command line tools, saying: “The thing that most [command line] tools have in common is that they do a small number of things really well.”

The question to you is: What small things do you want to do really well?

Practicing Vulnerability Hunting in Programming Languages for Music

Tuesday, December 19th, 2017

If you watched Natalie Silvanovich‘s presentation on mining the JavaScript standard for vulnerabilities, the tweet from Computer Science @CompSciFact pointing to Programming Languages Used for Music must have you drooling like one of Pavlov‘s dogs.

I count one hundred and forty-seven (147) languages, of varying degrees of popularity, none of which has gotten the security review of ECMA-262. (Michael Aranda wades through terminology/naming issues for ECMAScript vs. JavaScript at: What’s the difference between JavaScript and ECMAScript?.)

Good hunting!

A Little Story About the `yes` Unix Command

Tuesday, December 12th, 2017

A Little Story About the `yes` Unix Command by Matthias Endler.

From the post:

What’s the simplest Unix command you know?

There’s echo, which prints a string to stdout, and true, which always terminates with an exit code of 0.

Among the rows of simple Unix commands, there’s also yes. If you run it without arguments, you get an infinite stream of y’s, separated by a newline:

Ever installed a program, which required you to type “y” and hit enter to keep going? yes to the rescue!

Endler sets out to re-implement the yes command in Rust.

Why re-implement Unix tools?

The trivial program yes turns out not to be so trivial after all. It uses output buffering and memory alignment to improve performance. Re-implementing Unix tools is fun and makes me appreciate the nifty tricks, which make our computers fast.

Endler’s story is unlikely to replace any of your holiday favorites but unlike those, it has the potential to make you a better programmer.
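The output-buffering point is easy to sketch in Python, even though Endler's implementation is in Rust; the buffer size below is an arbitrary choice, not his:

import sys

def naive_yes(text="y"):
    # One small write per line: simple, but the per-write overhead dominates.
    line = text + "\n"
    while True:
        sys.stdout.write(line)

def buffered_yes(text="y", buffer_size=64 * 1024):
    # Build a large block of repeated lines once, then reuse it,
    # so each write() pushes thousands of lines at a time.
    line = (text + "\n").encode()
    block = line * (buffer_size // len(line))
    out = sys.stdout.buffer
    while True:
        out.write(block)

if __name__ == "__main__":
    try:
        buffered_yes(sys.argv[1] if len(sys.argv) > 1 else "y")
    except (BrokenPipeError, KeyboardInterrupt):
        pass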

Releasing Failed Code to Distract from Accountability

Sunday, December 10th, 2017

Dutch government publishes large project as Free Software by Carmen Bianca Bakker.

From the post:

The Dutch Ministry of the Interior and Kingdom Relations released the source code and documentation of Basisregistratie Personen (BRP), a 100M€ IT system that registers information about inhabitants within the Netherlands. This comes as a great success for Public Code, and the FSFE applauds the Dutch government’s shift to Free Software.

Operation BRP is an IT project by the Dutch government that has been in the works since 2004. It has cost Dutch taxpayers upwards of 100 million Euros and has endured three failed attempts at revival, without anything to show for it. From the outside, it was unclear what exactly was costing taxpayers so much money with very little information to go on. After the plug had been pulled from the project earlier this year in July, the former interior minister agreed to publish the source code under pressure of Parliament, to offer transparency about the failed project. Secretary of state Knops has now gone beyond that promise and released the source code as Free Software (a.k.a. Open Source Software) to the public.

In 2013, when the first smoke signals showed, the former interior minister initially wanted to address concerns about the project by providing limited parts of the source code to a limited amount of people under certain restrictive conditions. The ministry has since made a complete about-face, releasing a snapshot of the (allegedly) full source code and documentation under the terms of the GNU Affero General Public License, with the development history soon to follow.

As for the “…complete about-face…,” the American expression is: “You’ve been had.”

By appearing to agonize over the release of the source code, the “former interior minister” has made it appear the public has won a great victory for transparency.

Actually not.

Does the “transparency” offered by the source code show who authorized the expenditure of each part of the 100M€ total and who was paid that 100M€? Does source code “transparency” disclose project management decisions and who, in terms of government officials, approved those decisions? For that matter, does source code “transparency” disclose discussions of project choices at all, and who was present at those discussions?

It’s not hard to see that source code “transparency” is a deliberate failure on the part of the Dutch Ministry of the Interior and Kingdom Relations to be transparent. It has withheld, quite deliberately, any information that would enable Dutch citizens, programmers or otherwise, to have informed opinions about the failure of this project. Or to hold anyone accountable for its failure.

This may be:

…an unprecedented move of transparency by the Dutch government….

but only if the Dutch government is a black hole in terms of meaningful accountability for its software projects.

Which appears to be the case.

PS: Assuming Dutch citizens can pry project documentation out of the secretive Dutch Ministry of the Interior and Kingdom Relations, I know some Dutch topic mappers who could assist with establishing transparency. If that’s what you want.

The Top-100 rated Devoxx Belgium 2017 talks (or the full 207)

Thursday, December 7th, 2017

The Top-100 rated Devoxx Belgium 2017 talks

The top-100 list has Devoxx Belgium 2017 talks sorted in voting order, with hyperlinks to the top 50.

If you are looking for more comprehensive coverage of Devoxx Belgium 2017, try the Devoxx Belgium 2017 YouTube Playlist, with 207 videos!

Kudos to Devoxx for putting conference content online to spread the word about technology.

So You Want to be a WIZARD [Spoiler Alert: It Requires Work]

Monday, November 20th, 2017

So You Want to be a WIZARD by Julia Evans.

I avoid using terms like inspirational, transforming, etc. because it is so rare that software, projects, or presentations merit those terms.

Today I am making an exception to that rule to say:

So You Want to be a Wizard by Julia Evans can transform your work in computer science.

Notice the use of “can” in that sentence. No guarantees because unlike many promised solutions, Julia says up front that hard work is required to use her suggestions successfully.

That’s right. If these methods don’t work for you it will be because you did not apply them. (full stop)

No guarantees you will get praise, promotions, recognition, etc., as a result of using Julia’s techniques, but you will be a wizard none the less.

One consolation is that wizards rarely notice back-biters, office sycophants, and a range of other toxic co-workers. They are too busy preparing themselves to answer the next technical issue that requires a wizard.

10 Papers Every Developer Should Read (At Least Twice) [With Hyperlinks]

Thursday, November 16th, 2017

10 Papers Every Developer Should Read (At Least Twice) by Michael Feathers

Feathers omits hyperlinks for the 10 papers every developer should read, at least twice.

Hyperlinks eliminate searches by every reader, saving them time and load on their favorite search engine, not to mention providing access more quickly. Feathers’ list with hyperlinks follows.

Most are easy to read but some are rough going – they drop off into math after the first few pages. Take the math to tolerance and then move on. The ideas are the important thing.

See Feathers’ post for his comments on each paper.

Even a shallow web composed of hyperlinks is better than no web at all.

Scipy Lecture Notes

Sunday, November 12th, 2017

Scipy Lecture Notes edited by Gaël Varoquaux, Emmanuelle Gouillart, Olav Vahtras.

From the webpage:

Tutorials on the scientific Python ecosystem: a quick introduction to central tools and techniques. The different chapters each correspond to a 1 to 2 hours course with increasing level of expertise, from beginner to expert.

In PDF format, some six-hundred and fifty-seven pages of top quality material on Scipy.

In addition to the main editors, there are fourteen chapter editors and seventy-three contributors.

Good documentation needs maintenance, so if you have improvements or examples to offer, perhaps your name will appear here in the not too distant future.


Introduction To ARM Assembly Basics [The Weakest Link?]

Friday, November 10th, 2017

Introduction To ARM Assembly Basics

The latest security fails by Intel and Microsoft capture media and blog headlines, but ARM devices are more numerous.

ARM devices, like a Windows server in an unlocked closet, may be the weakest link in your next target.

From the webpage:

Welcome to this tutorial series on ARM assembly basics. This is the preparation for the followup tutorial series on ARM exploit development. Before we can dive into creating ARM shellcode and build ROP chains, we need to cover some ARM Assembly basics first.

The following topics will be covered step by step:

ARM Assembly Basics Tutorial Series:
Part 1: Introduction to ARM Assembly
Part 2: Data Types and Registers
Part 3: ARM Instruction Set
Part 4: Memory Instructions: Loading and Storing Data
Part 5: Load and Store Multiple
Part 6: Conditional Execution and Branching
Part 7: Stack and Functions

To follow along with the examples, you will need an ARM based lab environment. If you don’t have an ARM device (like Raspberry Pi), you can set up your own lab environment in a Virtual Machine using QEMU and the Raspberry Pi distro by following this tutorial. If you are not familiar with basic debugging with GDB, you can get the basics in this tutorial. In this tutorial, the focus will be on ARM 32-bit, and the examples are compiled on an ARMv6.

Why ARM?

This tutorial is generally for people who want to learn the basics of ARM assembly. Especially for those of you who are interested in exploit writing on the ARM platform. You might have already noticed that ARM processors are everywhere around you. When I look around me, I can count far more devices that feature an ARM processor in my house than Intel processors. This includes phones, routers, and not to forget the IoT devices that seem to explode in sales these days. That said, the ARM processor has become one of the most widespread CPU cores in the world. Which brings us to the fact that like PCs, IoT devices are susceptible to improper input validation abuse such as buffer overflows. Given the widespread usage of ARM based devices and the potential for misuse, attacks on these devices have become much more common.

Yet, we have more experts specialized in x86 security research than we have for ARM, although ARM assembly language is perhaps the easiest assembly language in widespread use. So, why aren’t more people focusing on ARM? Perhaps because there are more learning resources out there covering exploitation on Intel than there are for ARM. Just think about the great tutorials on Intel x86 Exploit writing by Fuzzy Security or the Corelan Team – Guidelines like these help people interested in this specific area to get practical knowledge and the inspiration to learn beyond what is covered in those tutorials. If you are interested in x86 exploit writing, the Corelan and Fuzzysec tutorials are your perfect starting point. In this tutorial series here, we will focus on assembly basics and exploit writing on ARM.

Don’t forget to follow Azeria on Twitter, or her RSS Feed.


PS: She recently posted a really cool cheatsheet: Assembly Basics Cheatsheet. I’m going to use it to lobby (myself) for a pair of 32″ monitors so I can enlarge it on one screen and have a non-scrolling display. (Suggestions on the monitors?)

Flight rules for git – How to Distinguish Between Astronauts and Programmers

Thursday, November 9th, 2017

Flight rules for git by Kate Hudson.

From the post:

What are “flight rules”?

A guide for astronauts (now, programmers using git) about what to do when things go wrong.

Flight Rules are the hard-earned body of knowledge recorded in manuals that list, step-by-step, what to do if X occurs, and why. Essentially, they are extremely detailed, scenario-specific standard operating procedures. […]

NASA has been capturing our missteps, disasters and solutions since the early 1960s, when Mercury-era ground teams first started gathering “lessons learned” into a compendium that now lists thousands of problematic situations, from engine failure to busted hatch handles to computer glitches, and their solutions.

— Chris Hadfield, An Astronaut’s Guide to Life on Earth.

Hudson devises an easy test to distinguish between astronauts and programmers:

Astronauts – missteps, disasters and solutions are written down.

Programmers – missteps, disasters and solutions are programmer/sysadmin lore.

With Usenet and Stack Overflow, you can argue improvement by programmers, but it’s hardly been systematic. Even so, it depends on a “good” query returning few enough “hits” to be useful.

Hudson is capturing “flight rules” for git.

Act like an astronaut and write down your missteps, disasters and solutions.

NASA made it to the moon and beyond by writing things down.

Who knows?

Writing down software missteps, disasters and solutions may help render all systems transparent, willingly or not.

SciPy 1.0.0! [Awaiting Your Commands]

Thursday, October 26th, 2017

SciPy 1.0.0

From the webpage:

We are extremely pleased to announce the release of SciPy 1.0, 16 years after version 0.1 saw the light of day. It has been a long, productive journey to get here, and we anticipate many more exciting new features and releases in the future.

Why 1.0 now?

A version number should reflect the maturity of a project – and SciPy was a mature and stable library that is heavily used in production settings for a long time already. From that perspective, the 1.0 version number is long overdue.

Some key project goals, both technical (e.g. Windows wheels and continuous integration) and organisational (a governance structure, code of conduct and a roadmap), have been achieved recently.

Many of us are a bit perfectionist, and therefore are reluctant to call something “1.0” because it may imply that it’s “finished” or “we are 100% happy with it”. This is normal for many open source projects, however that doesn’t make it right. We acknowledge to ourselves that it’s not perfect, and there are some dusty corners left (that will probably always be the case). Despite that, SciPy is extremely useful to its users, on average has high quality code and documentation, and gives the stability and backwards compatibility guarantees that a 1.0 label imply.

In case your hands are trembling too much to type in the URLs:

SciPy Cookbook

Scipy 1.0.0 Reference Guide, [HTML+zip], [PDF]

Like most tools, it isn’t weaponized until you apply it to data.
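For instance, a minimal sketch of applying it to data; curve fitting is one small corner of the library, and the "true" parameters and noise level below are made up:

import numpy as np
from scipy.optimize import curve_fit

def model(t, amplitude, decay):
    return amplitude * np.exp(-decay * t)

# Fake some noisy measurements of a decaying exponential.
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 50)
noisy = model(t, 2.5, 1.3) + rng.normal(scale=0.05, size=t.size)

# Recover the parameters from the noisy data.
params, covariance = curve_fit(model, t, noisy, p0=(1.0, 1.0))
print(params)  # should land close to (2.5, 1.3)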


PS: If you want to get ahead of a co-worker, give them this URL: Don’t look, it’s a blog feed for SciPy. Sorry, you looked, didn’t you?

How To Be A Wizard Programmer – Julia Evans @b0rk

Monday, October 9th, 2017

See at full scale.

Criticism: Julia does miss one important step!

Follow: Julia Evans @b0rk


Building Data Science with JS – Lifting the Curtain on Game Reviews

Saturday, October 7th, 2017

Building Data Science with JS by Tim Ermilov.

Three videos thus far:

Building Data Science with JS – Part 1 – Introduction

Building Data Science with JS – Part 2 – Microservices

Building Data Science with JS – Part 3 – RabbitMQ and OpenCritic microservice

Tim starts with the observation that the percentage of users assigning a score to a game isn’t very helpful. It tells you nothing about the content of the game and/or the person rating it.

In subject identity terms, each level (mighty, strong, weak, fair) collapses information about the game and a particular reviewer into a single summary subject. OpenCritic then displays the percent of reviewers who are represented by that summary subject.

The problem with the summary subject is that one critic may have down-rated the game for poor content, another for sexism and still another for bad graphics. But a user only knows that, for reasons unknown, a critic whose past behavior is unknown evaluated unknown content and assigned it a rating.

A user could read all the reviews and study the history of each reviewer, along with the other games they have evaluated, but Ermilov proposes a more efficient means to peek behind the curtain of game ratings. (part 1)

In part 2, Ermilov designs a microservice based application to extract, process and display game reviews.

If you thought the first two parts were slow, you should enjoy Part 3. 😉 Ermilov speeds through a number of resources, documents, JS libraries, not to mention his source code for the project. You are likely to hit pause during this video.

Some links you will find helpful for Part 3:

AMQP 0-9-1 library and client for Node.JS – Channel-oriented API reference

AMQP 0-9-1 library and client for Node.JS (Github)

Microwork – simple creation of distributed scalable microservices in node.js with RabbitMQ (simplifies use of AMQP)

node-unfluff – Automatically extract body content (and other cool stuff) from an html document


RabbitMQ. (Recommends looking at the RabbitMQ tutorials.)

Exploratory Data Analysis of Tropical Storms in R

Tuesday, September 26th, 2017

Exploratory Data Analysis of Tropical Storms in R by Scott Stoltzman.

From the post:

The disastrous impact of recent hurricanes, Harvey and Irma, generated a large influx of data within the online community. I was curious about the history of hurricanes and tropical storms so I found a data set on and started some basic Exploratory data analysis (EDA).

EDA is crucial to starting any project. Through EDA you can start to identify errors & inconsistencies in your data, find interesting patterns, see correlations and start to develop hypotheses to test. For most people, basic spreadsheets and charts are handy and provide a great place to start. They are an easy-to-use method to manipulate and visualize your data quickly. Data scientists may cringe at the idea of using a graphical user interface (GUI) to kick-off the EDA process but those tools are very effective and efficient when used properly. However, if you’re reading this, you’re probably trying to take EDA to the next level. The best way to learn is to get your hands dirty, let’s get started.

The original source of the data can be found at

Great walk-through on exploratory data analysis.
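Stoltzman works in R; for readers on the Python side, the same first pass might look roughly like this, where the file name and column names are placeholders rather than the post's actual data set:

import pandas as pd

# Placeholder file and column names; substitute the storm data you have.
storms = pd.read_csv("storms.csv")

print(storms.shape)          # rows and columns
print(storms.dtypes)         # columns read with unexpected types?
print(storms.isna().sum())   # missing values per column
print(storms.describe())     # quick numeric summaries

# A first aggregate: strongest recorded wind per named storm.
print(
    storms.groupby("name")["wind"]
    .max()
    .sort_values(ascending=False)
    .head(10)
)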

Everyone talks about the weather but did you know there is a forty (40) year climate lag between cause and effect?

The human impact on the environment today, won’t be felt for another forty (40) years.

Care to predict the impact of a hurricane in 2057?

Some other data/analysis resources on hurricanes, Climate Prediction Center, Hurricane Forecast Computer Models, National Hurricane Center.

PS: Is a Category 6 Hurricane Possible? by Brian Donegan is an interesting discussion on going beyond category 5 for hurricanes. For reference on speeds, see: Fujita Scale (tornadoes).

MIT License Wins Converts (some anyway)

Friday, September 22nd, 2017

Relicensing React, Jest, Flow, and Immutable.js by Adam Wolff.

From the post:

Next week, we are going to relicense our open source projects React, Jest, Flow, and Immutable.js under the MIT license. We’re relicensing these projects because React is the foundation of a broad ecosystem of open source software for the web, and we don’t want to hold back forward progress for nontechnical reasons.

This decision comes after several weeks of disappointment and uncertainty for our community. Although we still believe our BSD + Patents license provides some benefits to users of our projects, we acknowledge that we failed to decisively convince this community.

In the wake of uncertainty about our license, we know that many teams went through the process of selecting an alternative library to React. We’re sorry for the churn. We don’t expect to win these teams back by making this change, but we do want to leave the door open. Friendly cooperation and competition in this space pushes us all forward, and we want to participate fully.

This shift naturally raises questions about the rest of Facebook’s open source projects. Many of our popular projects will keep the BSD + Patents license for now. We’re evaluating those projects’ licenses too, but each project is different and alternative licensing options will depend on a variety of factors.

We’ll include the license updates with React 16’s release next week. We’ve been working on React 16 for over a year, and we’ve completely rewritten its internals in order to unlock powerful features that will benefit everyone building user interfaces at scale. We’ll share more soon about how we rewrote React, and we hope that our work will inspire developers everywhere, whether they use React or not. We’re looking forward to putting this license discussion behind us and getting back to what we care about most: shipping great products.

Since I bang on about Facebook‘s 24×7 censorship and shaping of your worldview, it’s only fair to mention when they make a good choice.

It in no way excuses or justifies their ongoing offenses against the public but it’s some evidence that decent people remain employed at Facebook.

With any luck, the decent insiders will wrest control of Facebook away from its government toadies and collaborators.


RStartHere

Monday, September 18th, 2017

RStartHere by Garrett Grolemund.

R packages organized by their role in data science:

This is very cool! Use and share!

@rstudio Cheatsheets Now B&W Printer Friendly

Saturday, September 9th, 2017

Mara Averick, @dataandme, tweets:

All the @rstudio Cheatsheets have been B&W printer-friendlier-ized

It’s a small thing but appreciated when documentation is B&W friendly.

PS: The @rstudio cheatsheets are also good examples of layout and clarity.

The International Conference on Functional Programming – 2017

Tuesday, September 5th, 2017

The International Conference on Functional Programming – 2017 – Papers

If you are on the Gulf or East coast of the United States, take this opportunity to download papers to read following landfall of Irma.

You may not have Internet service but if you have printed several papers out as emergency preparedness, you won’t be at a loss for reading materials.

I’ve been in the impact zone of several hurricanes and while reading materials don’t make repairs go any faster, they do help pass the time.

Reinventing Wheels with No Wheel Experience

Friday, June 30th, 2017

Rob Graham, @ErrataRob, captured an essential truth when he tweeted:

Wheel re-invention is inherent in every new programming language, every new library, and, no doubt, nearly every new program.

How much “wheel experience” does every programmer have across the breadth of software vulnerabilities?

Hard to imagine meaningful numbers on the “wheel experience” of programmers in general, but vulnerability reports make it clear that either “wheel experience” is lacking or the lesson didn’t stick. Your call.

Vulnerabilities may occur in any release so standard practice is to check every release, however small. Have your results independently verified by trusted others.

PS: For the details on systemd, see: Sergey Bratus and the systemd thread.

You Are Not Google (Blasphemy I Know, But He Said It, Not Me)

Thursday, June 8th, 2017

You Are Not Google by Ozan Onay.

From the post:

Software engineers go crazy for the most ridiculous things. We like to think that we’re hyper-rational, but when we have to choose a technology, we end up in a kind of frenzy — bouncing from one person’s Hacker News comment to another’s blog post until, in a stupor, we float helplessly toward the brightest light and lay prone in front of it, oblivious to what we were looking for in the first place.

This is not how rational people make decisions, but it is how software engineers decide to use MapReduce.

Spoiler: Onay will also say you are not Amazon or LinkedIn.

Just so you know and can prepare for the ego shock.

Great read that invokes Polya’s First Principle:

Understand the Problem

This seems so obvious that it is often not even mentioned, yet students are often stymied in their efforts to solve problems simply because they don’t understand it fully, or even in part. Polya taught teachers to ask students questions such as:

  • Do you understand all the words used in stating the problem?
  • What are you asked to find or show?
  • Can you restate the problem in your own words?
  • Can you think of a picture or a diagram that might help you understand the problem?
  • Is there enough information to enable you to find a solution?

Onay coins a mnemonic for you to apply and points to additional reading.


PS: Caution: Understanding a problem can cast doubt on otherwise successful proposals for funding. Your call.

Copy-n-Paste Security Alert!

Wednesday, June 7th, 2017

Security: The Dangers Of Copying And Pasting R Code.

From the post:

Most of the time when we stumble across a code snippet online, we often blindly copy and paste it into the R console. I suspect almost everyone does this. After all, what’s the harm?

The post illustrates how innocent-appearing R code can conceal unhappy surprises!

Concealment isn’t limited to R code.

Any CSS controlled display is capable of concealing code for you to copy-n-paste into a console, terminal window, script or program.

Endless possibilities for HTML pages/emails with code + a “little something extra.”

What are your copy-n-paste practices?

C Reference Manual (D.M. Ritchie, 1974)

Tuesday, May 23rd, 2017

C Reference Manual (D.M. Ritchie, 1974)

I mention the C Reference Manual, now forty-three (43) years old, as encouragement to write good documentation.

It may have a longer life than you ever expected!

For example, in 1974 Ritchie writes:

2.2 Identifier (Names)

An identifier is a sequence of letters and digits: the first character must be alphabetic.

Which we find replicated years later in ISO/IEC 8879 : 1986 (SGML):

4.198 name: A name token whose first character is a name start character.

4.201 name start character: A character that can begin a name: letters and others designated by the concrete syntax.

And in production [53]:

name start character =
    LC Letter |
    UC Letter |
    LCNMSTRT |
    UCNMSTRT

Where Figure 1 of 9.2.1 SGML Character defines LC Letter as a-z, UC Letter as A-Z, LCNMSTRT as (none), UCNMSTRT as (none), in the concrete syntax.

And in 1997, the letter vs. digit distinction finds its way into Extensible Markup Language (XML) 1.0.

[4] NameChar ::= Letter | Digit | '.' | '-' | '_' | ':' | CombiningChar | Extender
[5] Name ::= (Letter | '_' | ':') (NameChar)*

“Letter” is a link to a production referencing all the qualifying Unicode characters which is too long to include here.

What started off as an arbitrary choice, “alphabetic” characters as name start characters in 1974, is picked up some 12 years later (1986) in ISO/IEC 8879 (SGML), both of which were bound by a restricted character set.

When the opportunity came to abandon the letter versus digit distinction in name start characters (XML 1.0), the result is a larger character repertoire for name start characters, but digits continue as second-class citizens.
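The 1974 rule is simple enough to state as a pattern; here is a sketch that checks only the rule as quoted above (letters and digits, first character alphabetic), leaving aside anything else the manual says about names:

import re

# Letters and digits, first character alphabetic -- the rule quoted above.
NAME_1974 = re.compile(r"[A-Za-z][A-Za-z0-9]*\Z")

for candidate in ["counter", "x2", "2x"]:
    verdict = "valid" if NAME_1974.match(candidate) else "invalid"
    print(f"{candidate!r}: {verdict}")
# 'counter': valid, 'x2': valid, '2x': invalid (digit as a name start character)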

Can you point to an explanation of why Ritchie preferred alphabetic characters over digits for name start characters?

ARM Releases Machine Readable Architecture Specification (Intel?)

Saturday, April 22nd, 2017

ARM Releases Machine Readable Architecture Specification by Alastair Reid.

From the post:

Today ARM released version 8.2 of the ARM v8-A processor specification in machine readable form. This specification describes almost all of the architecture: instructions, page table walks, taking interrupts, taking synchronous exceptions such as page faults, taking asynchronous exceptions such as bus faults, user mode, system mode, hypervisor mode, secure mode, debug mode. It details all the instruction formats and system register formats. The semantics is written in ARM’s ASL Specification Language so it is all executable and has been tested very thoroughly using the same architecture conformance tests that ARM uses to test its processors (See my paper “Trustworthy Specifications of ARM v8-A and v8-M System Level Architecture”.)

The specification is being released in three sets of XML files:

  • The System Register Specification consists of an XML file for each system register in the architecture. For each register, the XML details all the fields within the register, how to access the register and which privilege levels can access the register.
  • The AArch64 Specification consists of an XML file for each instruction in the 64-bit architecture. For each instruction, there is the encoding diagram for the instruction, ASL code for decoding the instruction, ASL code for executing the instruction and any supporting code needed to execute the instruction and the decode tree for finding the instruction corresponding to a given bit-pattern. This also contains the ASL code for the system architecture: page table walks, exceptions, debug, etc.
  • The AArch32 Specification is similar to the AArch64 specification: it contains encoding diagrams, decode trees, decode/execute ASL code and supporting ASL code.

Alastair provides starting points for use of this material by outlining his prior uses of the same.
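One obvious starting point is simply to inventory the XML before assuming anything about its schema; the sketch below uses only the standard library and makes no assumptions about ARM's element or attribute names:

import sys
import xml.etree.ElementTree as ET
from pathlib import Path

# Walk the released specification files and report what each one contains.
spec_dir = Path(sys.argv[1] if len(sys.argv) > 1 else ".")

for xml_file in sorted(spec_dir.rglob("*.xml")):
    try:
        root = ET.parse(xml_file).getroot()
    except ET.ParseError as err:
        print(f"{xml_file}: parse error: {err}")
        continue
    child_tags = sorted({child.tag for child in root})
    print(f"{xml_file.name}: root <{root.tag}>, child tags: {child_tags}")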

This raises the question: why isn’t an equivalent machine readable data set available for Intel® 64 and IA-32 Architectures? (PDF manuals)

The data is there, but not in a machine readable format.

Anyone know why Intel doesn’t provide the same convenience?