Another Word For It
Patrick Durusau on Topic Maps and Semantic Diversity

December 2, 2018

Programming Language Foundations in Agda [Hackers Fear Not!]

Filed under: Agda,Computer Science,Cybersecurity,Hacking,Programming,Proof Theory — Patrick Durusau @ 11:47 am

Programming Language Foundations in Agda by Philip Wadler and Wen Kokke.

From the preface:

The most profound connection between logic and computation is a pun. The doctrine of Propositions as Types asserts that a certain kind of formal structure may be read in two ways: either as a proposition in logic or as a type in computing. Further, a related structure may be read as either the proof of the proposition or as a programme of the corresponding type. Further still, simplification of proofs corresponds to evaluation of programs.

Accordingly, the title of this book also has two readings. It may be parsed as “(Programming Language) Foundations in Agda” or “Programming (Language Foundations) in Agda” — the specifications we will write in the proof assistant Agda both describe programming languages and are themselves programmes.

The book is aimed at students in the last year of an undergraduate honours programme or the first year of a master or doctorate degree. It aims to teach the fundamentals of operational semantics of programming languages, with simply-typed lambda calculus as the central example. The textbook is written as a literate script in Agda. The hope is that using a proof assistant will make the development more concrete and accessible to students, and give them rapid feedback to find and correct misapprehensions.

The book is broken into two parts. The first part, Logical Foundations, develops the needed formalisms. The second part, Programming Language Foundations, introduces basic methods of operational semantics.
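The pun travels well beyond Agda. As a minimal sketch in Python (my illustration, not the book's; Python checks none of this at compile time, which is exactly what Agda adds): read a type as a proposition and a total function of that type as its proof.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition: "A and B implies A". Any total function of this type
# is a proof; the pair type plays the role of conjunction.
def proj1(pair: Tuple[A, B]) -> A:
    a, _ = pair
    return a

# Proposition: "(A implies B) and (B implies C) implies (A implies C)".
# Composition is the proof; the function arrow is implication.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```

In Agda the same reading scales from toy propositions like these up to full correctness proofs of the languages the book develops.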

Hackers should attend closely to Wadler and Kokke’s text to improve their own tools. The advantages of dependently typed programming are recited by Andrew Hynes in Why you should care about dependently typed programming and I won’t repeat them here.

Hynes also reassures hackers (perhaps not his intent) that a wave of dependently typed programming is not on the near horizon, saying:

So we’ve got these types that act as self-documenting proofs that functionality works, add clarity, add confidence our code works as well as runs. And, more than that, they make sense. Why didn’t we have these before? The short answer is, they’re a new concept, they’re not in every language, a large amount of people don’t know they exist or that this is even possible. Also, there are those I mentioned earlier, who hear about its use in research and dismiss it as purely for that purpose (let’s not forget that people write papers about languages like C and [Idealized] Algol, too). The fact I felt the need to write this article extolling their virtues should be proof enough of that.

Like object orientation and other ideas before it, it may take a while before this idea seeps down into being taught at universities and seen as standard. Functional programming has only just entered this space. The main stop-gap right now is this knowledge, and it’s the same reason you can’t snap your fingers together and have a bunch of Java devs who have never seen Haskell before writing perfect Haskell day one. Dependently typed programming is still a new concept, but that doesn’t mean you need to wait. Things we take for granted were new once, too.

I’m not arguing in favour of everybody in the world switching to a dependently typed language and doing everything possible dependently typed, that would be silly, and it encourages misuse. I am arguing in favour of, whenever possible (e.g. if you’re already using Haskell or similar) perhaps thinking whether dependent types suit what you’re writing. Chances are, there’s probably something they do suit very well indeed. They’re a truly fantastic tool and I’d argue that they will get better as time goes on due to way architecture will evolve. I think we’ll be seeing a lot more of them in the future. (emphasis in original)
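Hynes writes with languages like Idris and Agda in mind. Python cannot express dependent types, but a runtime sketch (my own, with a hypothetical Vec class) hints at what a dependently typed compiler rejects before the program ever runs:

```python
class Vec:
    """A toy length-indexed vector. The length is enforced at runtime,
    simulating what a dependent type system would prove statically."""

    def __init__(self, n, items):
        if len(items) != n:
            raise TypeError(f"expected length {n}, got {len(items)}")
        self.n = n
        self.items = list(items)

    def zip(self, other):
        # A dependent signature would read: zip : Vec n a -> Vec n b -> Vec n (a, b)
        if self.n != other.n:
            raise TypeError("zip requires vectors of equal length")
        return Vec(self.n, list(zip(self.items, other.items)))

ok = Vec(3, [1, 2, 3]).zip(Vec(3, "abc"))   # fine
# Vec(3, [1, 2, 3]).zip(Vec(2, "ab"))       # fails here at runtime;
#                                           # in Agda or Idris, at compile time
```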

Vulnerabilities have been, are, and will continue to be etched into silicon. Vulnerabilities exist in decades of code and in the code written to secure it. That silicon and code will still be running as dependently typed programming slowly seeps into the mainstream.

Hackers should benefit from and not fear dependently typed programming!

November 15, 2018

Before You Make a Thing [Technology and Society]

Filed under: Computer Science,Ethics,Politics — Patrick Durusau @ 10:55 am

Before You Make a Thing: some tips for approaching technology and society by Jentery Sayers.

From the webpage:

This is a guide for Technology and Society 200 (Fall 2018; 60 undergraduate students) at the University of Victoria. It consists of three point-form lists. The first is a series of theories and concepts drawn from assigned readings, the second is a rundown of practices corresponding with projects we studied, and the third itemizes prototyping techniques conducted in the course. All are intended to distill material from the term and communicate its relevance to project design and development. Some contradiction is inevitable. Thank you for your patience.

An extraordinary summary of the Prototyping Pasts + Futures class, whose description reads:

An offering in the Technology and Society minor at UVic, this course is about the entanglement of Western technologies with society and culture. We’ll examine some histories of these entanglements, discuss their effects today, and also speculate about their trajectories. One important question will persist throughout the term: How can and should we intervene in technologies as practices? Rather than treating technologies as tools we use or objects we examine from the outside, we’ll prototype with and through them as modes of inquiry. You’ll turn patents into 3-D forms, compose and implement use scenarios, “datify” old tech, and imagine a device you want to see in the world. You’ll document your research and development process along the way, reflect on what you learned, present your prototypes and findings, and also build a vocabulary of keywords for technology and society. I will not assume that you’re familiar with fields such as science and technology studies, media studies, critical design, or experimental art, and the prototyping exercises will rely on low-tech approaches. Technical competency required: know how to send an email.

Deeply impressive summary of the “Theories and Concepts,” “Practices,” and “Prototyping Techniques” from Prototyping Pasts + Futures.

Whether you want your technology to have a benign impact or are looking to put a fine edge on it, this is the resource for you!

Not to mention learning a great deal that will help you better communicate to clients the probable outcomes of their requests.

Looking forward to spending some serious time with these materials.

Enjoy!

September 20, 2018

Software disenchantment (a must read)

Filed under: Computer Science,Design,Programming,Software,Software Engineering — Patrick Durusau @ 3:34 pm

Software disenchantment by Nikita Prokopov.

From the post:


Windows 95 was 30Mb. Today we have web pages heavier than that! Windows 10 is 4Gb, which is 133 times as big. But is it 133 times as superior? I mean, functionally they are basically the same. Yes, we have Cortana, but I doubt it takes 3970 Mb. But whatever Windows 10 is, is Android really 150% of that?

Google keyboard app routinely eats 150 Mb. Is an app that draws 30 keys on a screen really five times more complex than the whole Windows 95? Google app, which is basically just a package for Google Web Search, is 350 Mb! Google Play Services, which I do not use (I don’t buy books, music or videos there)—300 Mb that just sit there and which I’m unable to delete.

Yep, that and more. Brimming with hurtful remarks, but also with suggestions for a leaner, faster and more effective future.

Prokopov doesn’t mention malware, but “ratio of bugs per line of code” offers a great summary of various estimates of bugs per line of code.
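For a hedged back-of-the-envelope, take the oft-cited industry-average range of 15 to 50 defects per 1,000 lines of delivered code (estimates vary widely with process and maturity) and scale it:

```python
# Defect counts scale with code size. The 15-50 defects per KLOC range
# is a commonly cited industry average; treat it as illustrative only.
def defect_range(lines_of_code):
    kloc = lines_of_code / 1000
    return int(kloc * 15), int(kloc * 50)

for lines in (100_000, 1_000_000, 50_000_000):
    lo, hi = defect_range(lines)
    print(f"{lines:>10,} lines: roughly {lo:,} to {hi:,} latent defects")
```

Bloat, in other words, is not just slow; every megabyte of unnecessary code carries its own share of bugs.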

Government programmers and their contractors should write as much bloated code as their funding will support.

Programmers working in the public interest should read Prokopov deeply and follow his advice.

August 3, 2018

Hints for Computer System Design

Filed under: Computer Science,Design — Patrick Durusau @ 7:24 pm

Hints for Computer System Design by Butler W. Lampson (1983)

Abstract:

Studying the design and implementation of a number of computer systems has led to some general hints for system design. They are described here and illustrated by many examples, ranging from hardware such as the Alto and the Dorado to application programs such as Bravo and Star.

Figure 1, a one-page table collecting the hints, is the part of the paper you will most often see quoted.

Figure 1 is a great summary, but don’t cheat yourself by using it in place of reading the full article. All of those slogans have a context of origin and usage.

I saw this in a tweet by Joe Duffy, who says he reads it at least once a year. Not a bad plan.

June 28, 2018

The Arcane Algorithm Archive

Filed under: Algorithms,Computer Science,Volunteer — Patrick Durusau @ 1:46 pm

The Arcane Algorithm Archive

From the webpage:

The Arcane Algorithm Archive is a collaborative effort to create a guide for all important algorithms in all languages.

This goal is obviously too ambitious for a book of any size, but it is a great project to learn from and work on and will hopefully become an incredible resource for programmers in the future.

The book can be found here: https://www.algorithm-archive.org/.

The github repository can be found here: https://github.com/algorithm-archivists/algorithm-archive.

Most algorithms have been covered on the youtube channel LeiosOS: https://www.youtube.com/user/LeiosOS and livecoded on Twitch: https://www.twitch.tv/simuleios.

If you would like to communicate more directly, please feel free to go to our discord: https://discord.gg/Pr2E9S6.

Note that this project is essentially a book about algorithms collaboratively written by an online community.

Fortunately, there are a lot of algorithms out there, which means that there is a lot of content material available.

Unfortunately, this means that we will probably never cover every algorithm ever created and instead need to focus on what the community sees as useful and necessary.

That said, we'll still cover a few algorithms for fun that have very little, if any, practical purpose.

If you would like to contribute, feel free to go to any chapter with code associated with it and implement that algorithm in your favorite language, and then submit the code via pull request, following the submission guidelines found in chapters/how_to_contribute.md (or here if you are reading this on gitbook).

Hopefully, this project will grow and allow individuals to learn about and try their hand at implementing different algorithms for fun and (potentially) useful projects. If nothing else, it will be an enjoyable adventure for our community.

Thanks for reading and let me know if there's anything wrong or if you want to see something implemented in the future!

If you are looking for a volunteer opportunity where your contribution will be noticeable (not buried in a large bulk of existing contributions), you have landed in the right place!

I don’t know a workable definition for “all algorithms” or “all languages” so feel free to contribute what interests you.
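For a flavor of what a contribution looks like, here is the kind of small, commented implementation a chapter calls for, using Euclid's GCD algorithm (my example, not one of the archive's chapters):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the GCD is unchanged when the larger
    argument is replaced by its remainder modulo the smaller."""
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a

assert gcd(1071, 462) == 21
```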

This being a mid-term election year in the U.S., I’m sure we are all going to need extra distractions in the coming months. Enjoy!

February 17, 2018

Evidence for Power Laws – “…I work scientifically!”

Filed under: Computer Science,Networks,Scale-Free — Patrick Durusau @ 9:28 pm

Scant Evidence of Power Laws Found in Real-World Networks by Erica Klarreich.

From the post:

A paper posted online last month has reignited a debate about one of the oldest, most startling claims in the modern era of network science: the proposition that most complex networks in the real world — from the World Wide Web to interacting proteins in a cell — are “scale-free.” Roughly speaking, that means that a few of their nodes should have many more connections than others, following a mathematical formula called a power law, so that there’s no one scale that characterizes the network.

Purely random networks do not obey power laws, so when the early proponents of the scale-free paradigm started seeing power laws in real-world networks in the late 1990s, they viewed them as evidence of a universal organizing principle underlying the formation of these diverse networks. The architecture of scale-freeness, researchers argued, could provide insight into fundamental questions such as how likely a virus is to cause an epidemic, or how easily hackers can disable a network.
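You can rehearse the debate on your own machine. A sketch using networkx (assuming it is installed): generate a preferential-attachment network, the textbook source of power-law degree distributions, and inspect the heavy tail. The paper's point, of course, is that settling "scale-free or not" takes careful statistical fitting, not the eyeballing done here.

```python
import collections
import networkx as nx

# Barabasi-Albert preferential attachment: the classic generator of
# (approximately) power-law degree distributions.
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

degree_counts = collections.Counter(d for _, d in G.degree())
for degree in sorted(degree_counts)[:8]:
    print(f"degree {degree:>3}: {degree_counts[degree]:>5} nodes")

# The signature of a heavy tail: a few hubs far above the median.
print("max degree:", max(d for _, d in G.degree()))
```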

An informative and highly entertaining read that reminds me of an exchange in The NeverEnding Story between Atreyu and Engywook.

Engywook’s “scientific specie-ality” is the Southern Oracle. From the transcript:

Atreyu: Have you ever been to the Southern Oracle?

Engywook: Eh… what do YOU think? I work scientifically!

In the context of the movie, Engywook’s answer is deeply ambiguous.

Where do you land on the power law question?

February 5, 2018

Unfairness By Algorithm

Filed under: Bias,Computer Science — Patrick Durusau @ 5:40 pm

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making by Lauren Smith.

From the post:

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.

Recent discussions have highlighted legal and ethical issues raised by the use of sensitive data for hiring, policing, benefits determinations, marketing, and other purposes. These conversations can become mired in definitional challenges that make progress towards solutions difficult. There are few easy ways to navigate these issues, but if stakeholders hold frank discussions, we can do more to promote fairness, encourage responsible data use, and combat discrimination.

To facilitate these discussions, the Future of Privacy Forum (FPF) attempted to identify, articulate, and categorize the types of harm that may result from automated decision-making. To inform this effort, FPF reviewed leading books, articles, and advocacy pieces on the topic of algorithmic discrimination. We distilled both the harms and potential mitigation strategies identified in the literature into two charts. We hope you will suggest revisions, identify challenges, and help improve the document by contacting lsmith@fpf.org. In addition to presenting this document for consideration for the FTC Informational Injury workshop, we anticipate it will be useful in assessing fairness, transparency and accountability for artificial intelligence, as well as methodologies to assess impacts on rights and freedoms under the EU General Data Protection Regulation.

The primary attractions are two tables, Potential Harms from Automated Decision-Making and Potential Mitigation Sets.

Take the tables as a starting point for analysis.

Some “unfair” practices, such as increased auto insurance prices for night-shift workers (which result in differential access to insurance), are actuarial questions. Insurers are not public charities and can legally discriminate based on perceived risk.

January 31, 2018

GraphDBLP [“dblp computer science bibliography” as a graph]

Filed under: Computer Science,Graphs,Neo4j,Networks — Patrick Durusau @ 3:30 pm

GraphDBLP: a system for analysing networks of computer scientists through graph databases by Mario Mezzanzanica, et al.

Abstract:

This paper presents GraphDBLP, a system that models the DBLP bibliography as a graph database for performing graph-based queries and social network analyses. GraphDBLP also enriches the DBLP data through semantic keyword similarities computed via word-embedding. In this paper, we discuss how the system was formalized as a multi-graph, and how similarity relations were identified through word2vec. We also provide three meaningful queries for exploring the DBLP community to (i) investigate author profiles by analysing their publication records; (ii) identify the most prolific authors on a given topic, and (iii) perform social network analyses over the whole community. To date, GraphDBLP contains 5+ million nodes and 24+ million relationships, enabling users to explore the DBLP data by referencing more than 3.3 million publications, 1.7 million authors, and more than 5 thousand publication venues. Through the use of word-embedding, more than 7.5 thousand keywords and related similarity values were collected. GraphDBLP was implemented on top of the Neo4j graph database. The whole dataset and the source code are publicly available to foster the improvement of GraphDBLP in the whole computer science community.

Although the article is behind a paywall, GraphDBLP as a tool is not! https://github.com/fabiomercorio/GraphDBLP.

From the webpage:

GraphDBLP is a tool that models the DBLP bibliography as a graph database for performing graph-based queries and social network analyses.

GraphDBLP also enriches the DBLP data through semantic keyword similarities computed via word-embedding.

GraphDBLP provides to users three meaningful queries for exploring the DBLP community:

  1. investigate author profiles by analysing their publication records;
  2. identify the most prolific authors on a given topic;
  3. perform social network analyses over the whole community;
  4. perform shortest-paths over DBLP (e.g., the shortest-path between authors, the analysis of co-author networks, etc.)

… (emphasis in original)
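Queries of that shape are easy to picture in Cypher. A hypothetical sketch with the neo4j Python driver; the labels, relationship types, and property names below are my guesses for illustration, not GraphDBLP's actual schema:

```python
from neo4j import GraphDatabase  # assumes the neo4j driver is installed

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

# Hypothetical schema:
# (:Author)-[:AUTHORED]->(:Publication)-[:HAS_KEYWORD]->(:Keyword)
QUERY = """
MATCH (a:Author)-[:AUTHORED]->(p:Publication)-[:HAS_KEYWORD]->(k:Keyword {name: $topic})
RETURN a.name AS author, count(p) AS papers
ORDER BY papers DESC LIMIT 10
"""

with driver.session() as session:
    for record in session.run(QUERY, topic="topic maps"):
        print(record["author"], record["papers"])
driver.close()
```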

Sorry to see author, title, venue, publication, and keyword all modeled as flat strings. Disappointing, but not uncommon.

Viewing these flat strings as parts of structured subject representatives will have to be layered on top of this default.

Not to minimize the importance of improving the usefulness of the dblp, but imagine integrating the GraphDBLP into your local library system. Without a massive data mapping project. That’s what lies just beyond the reach of this data project.

January 22, 2018

Don Knuth Needs Your Help

Filed under: Computer Science,Programming — Patrick Durusau @ 9:04 pm

Donald Knuth Turns 80, Seeks Problem-Solvers For TAOCP

From the post:

An anonymous reader writes:

When 24-year-old Donald Knuth began writing The Art of Computer Programming, he had no idea that he’d still be working on it 56 years later. This month he also celebrated his 80th birthday in Sweden with the world premiere of Knuth’s Fantasia Apocalyptica, a multimedia work for pipe organ and video based on the Bible’s Book of Revelation, which Knuth describes as “50 years in the making.”

But Knuth also points to the recent publication of “one of the most important sections of The Art of Computer Programming” in preliminary paperback form: Volume 4, Fascicle 6: Satisfiability. (“Given a Boolean function, can its variables be set to at least one pattern of 0s and 1s that will make the function true?”)

Here’s an excerpt from its back cover:

Revolutionary methods for solving such problems emerged at the beginning of the twenty-first century, and they’ve led to game-changing applications in industry. These so-called “SAT solvers” can now routinely find solutions to practical problems that involve millions of variables and were thought until very recently to be hopelessly difficult.

“in several noteworthy cases, nobody has yet pointed out any errors…” Knuth writes on his site, adding “I fear that the most probable hypothesis is that nobody has been sufficiently motivated to check these things out carefully as yet.” He’s uncomfortable printing a hardcover edition that hasn’t been fully vetted, and “I would like to enter here a plea for some readers to tell me explicitly, ‘Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,'” where N is one of the exercises listed on his web site.

Elsewhere he writes that two “pre-fascicles” — 5a and 5B — are also available for alpha-testing. “I’ve put them online primarily so that experts in the field can check the contents before I inflict them on a wider audience. But if you want to help debug them, please go right ahead.”
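The parenthetical definition of satisfiability above fits in a few lines of Python. A naive brute-force checker, fine for toy instances and hopeless at the million-variable scale the fascicle's SAT solvers handle:

```python
from itertools import product

def satisfiable(f, n):
    """Can the n variables of Boolean function f be set to at least
    one pattern of 0s and 1s making f true? (Exponential search.)"""
    return any(f(*bits) for bits in product((0, 1), repeat=n))

# (x or y) and not (x and y) -- exclusive or, satisfiable:
print(satisfiable(lambda x, y: (x or y) and not (x and y), 2))  # True
# x and not x -- a contradiction, unsatisfiable:
print(satisfiable(lambda x: x and not x, 1))                    # False
```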

Do you have some other leisure project for 2018 that is more important?

😉

December 21, 2017

Weird machines, exploitability, and provable unexploitability

Filed under: Computer Science,Cybersecurity,Security,Vocabularies — Patrick Durusau @ 7:54 pm

Weird machines, exploitability, and provable unexploitability by Thomas Dullien (IEEE pre-print, to appear IEEE Transactions on Emerging Topics in Computing)

Abstract:

The concept of exploit is central to computer security, particularly in the context of memory corruptions. Yet, in spite of the centrality of the concept and voluminous descriptions of various exploitation techniques or countermeasures, a good theoretical framework for describing and reasoning about exploitation has not yet been put forward.

A body of concepts and folk theorems exists in the community of exploitation practitioners; unfortunately, these concepts are rarely written down or made sufficiently precise for people outside of this community to benefit from them.

This paper clarifies a number of these concepts, provides a clear definition of exploit, a clear definition of the concept of a weird machine, and how programming of a weird machine leads to exploitation. The paper also shows, somewhat counterintuitively, that it is feasible to design some software in a way that even powerful attackers – with the ability to corrupt memory once – cannot gain an advantage.

The approach in this paper is focused on memory corruptions. While it can be applied to many security vulnerabilities introduced by other programming mistakes, it does not address side channel attacks, protocol weaknesses, or security problems that are present by design.

Dullien offers a common vocabulary to bridge the gap between ‘exploit practitioners’ (EPs) and academic researchers. Whether it will in fact bridge that gap remains to be seen, but even the attempt will prove useful.

Tracing the use/propagation of Dullien’s vocabulary across Google’s Project Zero reports and papers would provide a unique data set on the spread (or not) of a new vocabulary in computer science.

Not to mention being a way to map back into earlier literature with the newer vocabulary, via a topic map.

BTW, Dullien’s statement that “it is feasible to design some software in a way that even powerful attackers … cannot gain an advantage,” is speculation and should not dampen your holiday spirits. (I root for the hare and not the hounds as a rule.)

December 9, 2017

Lisp at the Frontier of Computation

Filed under: Computer Science,Lisp,Quantum — Patrick Durusau @ 10:18 am

Abstract:

Since the 1950s, Lisp has been used to describe and calculate in cutting-edge fields like artificial intelligence, robotics, symbolic mathematics, and advanced optimizing compilers. It is no surprise that Lisp has also found relevance in quantum computation, both in academia and industry. Hosted at Rigetti Computing, a quantum computing startup in Berkeley, Robert Smith will provide a pragmatic view of the technical, sociological, and psychological aspects of working with an interdisciplinary team, writing Lisp, to build the next generation of technology resource: the quantum computer.

ABOUT THE SPEAKER: Robert has been using Lisp for over a decade, and has been fortunate to work with and manage expert teams of Lisp programmers to build embedded fingerprint analysis systems, machine learning-based product recommendation software, metamaterial phased-array antennas, discrete differential geometric computer graphics software, and now quantum computers. As Director of Software Engineering, Robert is responsible for building the publicly available Rigetti Forest platform, powered by both a real quantum computer and one of the fastest single-node quantum computer simulators in the world.

Video notes mention “poor audio quality.” Not the best but clear and audible to me.

The coverage of the quantum computer work is great, but the talk is mostly a general promotion of Lisp.

Important links:

Forest (beta): provides development access to Rigetti’s 30-qubit simulator, the Quantum Virtual Machine™, and limited access to their quantum hardware systems for select partners. Workshop video plus numerous other resources.

A Practical Quantum Instruction Set Architecture by Robert S. Smith, Michael J. Curtis, William J. Zeng. (speaker plus two of his colleagues)

December 7, 2017

The Computer Science behind a modern distributed data store

Filed under: ArangoDB,Computer Science,Distributed Computing,Distributed Consistency — Patrick Durusau @ 1:34 pm

From the description:

What we see in the modern data store world is a race between different approaches to achieve a distributed and resilient storage of data. Every application needs a stateful layer which holds the data. There are at least three necessary ingredients, which are anything but trivial to combine, and of course even more challenging when heading for acceptable performance.

Over the past years there has been significant progress in both the science and the practical implementation of such data stores. In his talk Max Neunhöffer will introduce the audience to some of the needed ingredients, address the difficulties of their interplay and show four modern approaches of distributed open-source data stores.

Topics are:

  • Challenges in developing a distributed, resilient data store
  • Consensus, distributed transactions, distributed query optimization and execution
  • The inner workings of ArangoDB, Cassandra, Cockroach and RethinkDB

The talk will touch complex and difficult computer science, but will at the same time be accessible to and enjoyable by a wide range of developers.
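One of those ingredients is easy to sketch. A toy illustration of quorum replication (mine, not from the talk): with N replicas, W write acknowledgements, and R read responses, choosing W + R > N forces every read quorum to overlap the latest write quorum.

```python
class QuorumStore:
    """Toy quorum replication over N in-process 'replicas'. With
    W + R > N, any read quorum intersects the last write quorum, so
    the highest-versioned value a reader sees is the latest write."""

    def __init__(self, n=5, w=3, r=3):
        assert w + r > n, "quorums must overlap"
        self.n, self.w, self.r = n, w, r
        self.replicas = [{} for _ in range(n)]
        self.version = 0

    def write(self, key, value):
        self.version += 1
        for replica in self.replicas[:self.w]:   # acknowledged by W replicas
            replica[key] = (self.version, value)

    def read(self, key):
        # Query R replicas (the last R: the worst-case overlap with
        # the first W writers) and keep the newest version seen.
        answers = [rep.get(key) for rep in self.replicas[-self.r:]]
        return max((a for a in answers if a), default=None)

store = QuorumStore()
store.write("k", "v1")
store.write("k", "v2")
print(store.read("k"))  # (2, 'v2') -- the overlapping replica has the latest
```

Real systems replace this toy's fixed replica slices with failure detection, leader election, and consensus protocols such as Paxos or Raft, which is where the difficulty the talk describes comes in.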

I haven’t found the slides for this presentation but did stumble across ArangoDB Tech Talks and Slides.

Neunhöffer’s presentation will make you look at ArangoDB more closely.

November 20, 2017

So You Want to be a WIZARD [Spoiler Alert: It Requires Work]

Filed under: Computer Science,Programming — Patrick Durusau @ 9:28 am

So You Want to be a WIZARD by Julia Evans.

I avoid using terms like inspirational, transforming, etc. because it is so rare that software, projects, or presentations merit those terms.

Today I am making an exception to that rule to say:

So You Want to be a Wizard by Julia Evans can transform your work in computer science.

Notice the use of “can” in that sentence. No guarantees because unlike many promised solutions, Julia says up front that hard work is required to use her suggestions successfully.

That’s right. If these methods don’t work for you it will be because you did not apply them. (full stop)

No guarantees you will get praise, promotions, recognition, etc., as a result of using Julia’s techniques, but you will be a wizard none the less.

One consolation is that wizards rarely notice back-biters, office sycophants, and a range of other toxic co-workers. They are too busy preparing themselves to answer the next technical issue that requires a wizard.

November 16, 2017

10 Papers Every Developer Should Read (At Least Twice) [With Hyperlinks]

Filed under: Computer Science,Programming — Patrick Durusau @ 4:27 pm

10 Papers Every Developer Should Read (At Least Twice) by Michael Feathers

Feathers omits hyperlinks for the 10 papers every developer should read, at least twice.

Hyperlinks eliminate searches by every reader, saving them time and load on their favorite search engine, not to mention providing access more quickly. Feathers’ list with hyperlinks follows.

Most are easy to read but some are rough going – they drop off into math after the first few pages. Take the math to tolerance and then move on. The ideas are the important thing.

See Feathers’ post for his comments on each paper.

Even a shallow web composed of hyperlinks is better than no web at all.

November 9, 2017

A Primer for Computational Biology

Filed under: Bioinformatics,Biology,Computational Biology,Computer Science — Patrick Durusau @ 4:36 pm

A Primer for Computational Biology by Shawn T. O’Neil.

From the webpage:

A Primer for Computational Biology aims to provide life scientists and students the skills necessary for research in a data-rich world. The text covers accessing and using remote servers via the command-line, writing programs and pipelines for data analysis, and provides useful vocabulary for interdisciplinary work. The book is broken into three parts:

  1. Introduction to Unix/Linux: The command-line is the “natural environment” of scientific computing, and this part covers a wide range of topics, including logging in, working with files and directories, installing programs and writing scripts, and the powerful “pipe” operator for file and data manipulation.
  2. Programming in Python: Python is both a premier language for learning and a common choice in scientific software development. This part covers the basic concepts in programming (data types, if-statements and loops, functions) via examples of DNA-sequence analysis. This part also covers more complex subjects in software development such as objects and classes, modules, and APIs.
  3. Programming in R: The R language specializes in statistical data analysis, and is also quite useful for visualizing large datasets. This third part covers the basics of R as a programming language (data types, if-statements, functions, loops and when to use them) as well as techniques for large-scale, multi-test analyses. Other topics include S3 classes and data visualization with ggplot2.
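In the spirit of the second part, a minimal sketch of the sort of DNA-sequence analysis the book uses to teach Python (my example, not the book's):

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence, a standard
    first statistic in sequence analysis."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(f"{gc_content('ATGCGCGATTACA'):.2f}")  # 0.46
```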

Pass along to life scientists and students.

This isn’t a primer that separates the CS material from the domain-specific examples and prose, so adaptation to another domain is a question of re-writing.

I assume an adaptable primer wasn’t the author’s intention, so that isn’t a criticism but an observation: basic material is written over and over again, needlessly.

I first saw this in a tweet by Christophe Lalanne.

April 24, 2017

3 Reasons to Read: Algorithms to Live By

Filed under: Algorithms,Computer Science,Intelligence — Patrick Durusau @ 7:51 pm

How Algorithms can untangle Human Questions. Interview with Brian Christian by Roberto V. Zicari.

The entire interview is worth your study but the first question and answer establish why you should read Algorithms to Live By:

Q1. You have worked with cognitive scientist Tom Griffiths (professor of psychology and cognitive science at UC Berkeley) to show how algorithms used by computers can also untangle very human questions. What are the main lessons learned from such a joint work?

Brian Christian: I think ultimately there are three sets of insights that come out of the exploration of human decision-making from the perspective of computer science.

The first, quite simply, is that identifying the parallels between the problems we face in everyday life and some of the canonical problems of computer science can give us explicit strategies for real-life situations. So-called “explore/exploit” algorithms tell us when to go to our favorite restaurant and when to try something new; caching algorithms suggest — counterintuitively — that the messy pile of papers on your desk may in fact be the optimal structure for that information.

Second is that even in cases where there is no straightforward algorithm or easy answer, computer science offers us both a vocabulary for making sense of the problem, and strategies — using randomness, relaxing constraints — for making headway even when we can’t guarantee we’ll get the right answer every time.

Lastly and most broadly, computer science offers us a radically different picture of rationality than the one we’re used to seeing in, say, behavioral economics, where humans are portrayed as error-prone and irrational. Computer science shows us that being rational means taking the costs of computation — the costs of decision-making itself — into account. This leads to a much more human, and much more achievable picture of rationality: one that includes making mistakes and taking chances.
… (emphasis in original)
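The explore/exploit family is easy to make concrete. A minimal epsilon-greedy sketch (one of the simplest strategies in that family; the book treats more refined ones): mostly revisit the best restaurant seen so far, but sample a random one with probability epsilon.

```python
import random

def epsilon_greedy(restaurants, trials=1000, epsilon=0.1, seed=1):
    """Explore/exploit: usually pick the best-rated option so far,
    but try a random one with probability epsilon."""
    rng = random.Random(seed)
    totals = {r: 0.0 for r in restaurants}
    visits = {r: 0 for r in restaurants}
    for _ in range(trials):
        if rng.random() < epsilon or not any(visits.values()):
            choice = rng.choice(list(restaurants))
        else:
            choice = max(restaurants,
                         key=lambda r: totals[r] / max(visits[r], 1))
        totals[choice] += rng.gauss(restaurants[choice], 0.5)  # noisy enjoyment
        visits[choice] += 1
    return visits

# True mean enjoyment per restaurant (unknown to the algorithm):
print(epsilon_greedy({"old favorite": 0.7, "new thai place": 0.9, "diner": 0.4}))
```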

After the 2016 U.S. presidential election, I thought the verdict that humans are error-prone and irrational was unassailable.

Looking forward to the use of a human constructed lens (computer science) to view “human questions.” There are answers to “human questions” baked into computer science so watching the authors unpack those will be an interesting read. (Waiting for my copy to arrive.)

Just so you know, the Picador edition is a reprint; it was originally published in hardcover by William Collins on 21 April 2016. See Algorithms to Live By, a short review by Roberto Zicari, October 24, 2016.

April 7, 2017

Sci Hub It!

Filed under: Computer Science,Open Access — Patrick Durusau @ 1:51 pm

Sci Hub It!

Simple add-on to make it easier to use Sci-Hub.

If you aren’t already using this plug-in for Firefox you should be.

Quite handy!

Enjoy!

March 13, 2017

Notes to (NUS) Computer Science Freshmen…

Filed under: Books,Computer Science,Programming — Patrick Durusau @ 4:07 pm

Notes to (NUS) Computer Science Freshmen, From The Future

From the intro:

Early into the AY12/13 academic year, Prof Tay Yong Chiang organized a supper for Computer Science freshmen at Tembusu College. The bunch of seniors who were gathered there put together a document for NUS computing freshmen. This is that document.

Feel free to create a pull request to edit or add to it, and share it with other freshmen you know.

There is one sad note:


The Art of Computer Programming (a review of everything in Computer Science; pretty much nobody, save Knuth, has finished reading this)

When you think about the amount of time Knuth has spent researching, writing and editing The Art of Computer Programming (TAOCP), it doesn’t sound unreasonable to expect others, a significant number of others, to have read it.

Any online reading groups focused on TAOCP?

January 14, 2017

New Spaceship Speed in Conway’s Game of Life

Filed under: Cellular Automata,Computer Science — Patrick Durusau @ 5:09 pm

New Spaceship Speed in Conway’s Game of Life by Alexy Nigin.

From the post:

In this article, I assume that you have basic familiarity with Conway’s Game of Life. If this is not the case, you can try reading an explanatory article but you will still struggle to understand much of the following content.

The day before yesterday ConwayLife.com forums saw a new member named zdr. When we the lifenthusiasts meet a newcomer, we expect to see things like “brand new” 30-cell 700-gen methuselah and then have to explain why it is not notable. However, what zdr showed us made our jaws drop.

It was a 28-cell c/10 orthogonal spaceship:

An animated image of the spaceship

… (emphasis in the original)

The introduction mentioned there isn’t sufficient to digest the material in this post.

There is a wealth of material available on cellular automata (the Game of Life is one).

LifeWiki is one and Complex Cellular Automata is another. While they don’t exhaust all there is to know about cellular automata, gaining familiarity will take some time and skill.
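If you want to experiment while you read, the rules fit in a dozen lines of Python. A minimal sketch stepping a glider, the familiar c/4 diagonal spaceship (zdr's discovery is a much larger c/10 orthogonal one):

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    neighbours = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)
# After 4 generations the glider reappears shifted by (1, 1): speed c/4.
print(pattern == {(x + 1, y + 1) for x, y in glider})  # True
```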

Still, I offer this as encouragement that fundamental discoveries remain to be made.

But if and only if you reject conventional wisdom that prevents you from looking.

January 13, 2017

D-Wave Just Open-Sourced Quantum Computing [DC Beltway Parking Lot Distraction]

Filed under: Computer Science,Quantum — Patrick Durusau @ 9:10 pm

D-Wave Just Open-Sourced Quantum Computing by Dom Galeon.

D-Wave has just released a welcome distraction for CS types sitting in the DC Beltway Parking Lot on January 20-21, 2017. (I’m assuming you brought extra batteries for your laptop.) After you run out of gas, your laptop will be running on battery power alone.

Just remember to grab a copy of Qbsolv before you leave for the tailgate/parking lot party on the Beltway.

A software tool known as Qbsolv allows developers to program D-Wave’s quantum computers even without knowledge of quantum computing. It has already made it possible for D-Wave to work with a bunch of partners, but the company wants more. “D-Wave is driving the hardware forward,” Bo Ewald, president of D-Wave International, told Wired. “But we need more smart people thinking about applications, and another set thinking about software tools.”

To that end, D-Wave has open-sourced Qbsolv, making it possible for anyone to freely share and modify the software. D-Wave hopes to build an open source community of sorts for quantum computing. Of course, to actually run this software, you’d need access to a piece of hardware that uses quantum particles, like one of D-Wave’s quantum computers. However, for the many who don’t have that access, the company is making it possible to download a D-Wave simulator that can be used to test Qbsolv on other types of computers.

This open-source Qbsolv joins an already-existing free software tool called Qmasm, which was developed by one of Qbsolv’s first users, Scott Pakin of Los Alamos National Laboratory. “Not everyone in the computer science community realizes the potential impact of quantum computing,” said mathematician Fred Glover, who’s been working with Qbsolv. “Qbsolv offers a tool that can make this impact graphically visible, by getting researchers and practitioners involved in charting the future directions of quantum computing developments.”

D-Wave’s machines might still be limited to solving optimization problems, but it’s a good place to start with quantum computers. Together with D-Wave, IBM has managed to develop its own working quantum computer in 2000, while Google teamed up with NASA to make their own. Eventually, we’ll have a quantum computer that’s capable of performing all kinds of advanced computing problems, and now you can help make that happen.

From the github page:

qbsolv is a metaheuristic or partitioning solver that solves a potentially large quadratic unconstrained binary optimization (QUBO) problem by splitting it into pieces that are solved either on a D-Wave system or via a classical tabu solver.

The phrase, “…might still be limited to solving optimization problems…” isn’t as limiting as it might appear.

A recent (2014) survey of quadratic unconstrained binary optimization (QUBO), The Unconstrained Binary Quadratic Programming Problem: A Survey runs some thirty-three pages and should keep you occupied however long you sit on the DC Beltway.

From page 10 of the survey:


Kochenberger, Glover, Alidaee, and Wang (2005) examine the use of UBQP as a tool for clustering microarray data into groups with high degrees of similarity.

Where I read one person’s “similarity” to be another person’s test of “subject identity.”
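To make QUBO concrete, a naive sketch (mine, not qbsolv's): minimize x^T Q x over binary vectors x by exhaustive search, exactly the exponential blow-up that qbsolv's partitioning and the D-Wave hardware are designed to sidestep.

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force a QUBO: minimize sum over i, j of Q[i][j]*x[i]*x[j]
    for binary x. Viable only for tiny n."""
    n = len(Q)
    def energy(x):
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return min(product((0, 1), repeat=n), key=energy)

# Toy instance: diagonal terms reward choosing a variable; the +2
# off-diagonal penalizes choosing x0 and x1 together.
Q = [[-1,  2,  0],
     [ 0, -1,  0],
     [ 0,  0, -2]]
print(solve_qubo(Q))  # (0, 1, 1) -- tied with (1, 0, 1); never x0 and x1 together
```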

PS: Enjoy the DC Beltway. You may never see it motionless ever again.

January 1, 2017

OpenTOC (ACM SIG Proceedings – Free)

Filed under: Computer Science,Open Access — Patrick Durusau @ 8:59 pm

OpenTOC

From the webpage:

ACM OpenTOC is a unique service that enables Special Interest Groups to generate and post Tables of Contents for proceedings of their conferences enabling visitors to download the definitive version of the contents from the ACM Digital Library at no charge.

Downloads of these articles are captured in official ACM statistics, improving the accuracy of usage and impact measurements. Consistently linking to definitive versions of ACM articles should reduce user confusion over article versioning.

Conferences are listed by year, 2014 – 2016 and by event.

A step in the right direction.

Do you know if the digital library allows bulk downloading of search result metadata?

It didn’t the last time I had a digital library subscription. Contacting the secret ACM committee that decides on web features was verboten.

Enjoy this improvement in access while waiting for ACM access bottlenecks to wither and die.

December 29, 2016

Continuous Unix commit history from 1970 until today

Filed under: Computer Science,Linux OS — Patrick Durusau @ 5:49 pm

Continuous Unix commit history from 1970 until today

From the webpage:

The history and evolution of the Unix operating system is made available as a revision management repository, covering the period from its inception in 1970 as a 2.5 thousand line kernel and 26 commands, to 2016 as a widely-used 27 million line system. The 1.1GB repository contains about half a million commits and more than two thousand merges. The repository employs Git system for its storage and is hosted on GitHub. It has been created by synthesizing with custom software 24 snapshots of systems developed at Bell Labs, the University of California at Berkeley, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, about one thousand individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology.

You can read more details about the contents, creation, and uses of this repository through this link.

Two repositories are associated with the project:

  • unix-history-repo is a repository representing a reconstructed version of the Unix history, based on the currently available data. This repository will be often automatically regenerated from scratch, so this is not a place to make contributions. To ensure replicability its users are encouraged to fork it or archive it.
  • unix-history-make is a repository containing code and metadata used to build the above repository. Contributions to this repository are welcomed.

Not everyone will find this exciting but this rocks as a resource for:

empirical research in software engineering, information systems, and software archaeology
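For a taste of the archaeology, a hedged sketch (assuming Git is installed and the repository named above has been cloned locally; verify the GitHub path before relying on it):

```python
import subprocess

# Assumes a prior: git clone https://github.com/dspinellis/unix-history-repo
result = subprocess.run(
    ["git", "-C", "unix-history-repo", "log", "--reverse",
     "--date=short", "--format=%ad %s"],
    capture_output=True, text=True, check=True)
# The first line printed should be the 1970 Research-era epoch commit.
print(result.stdout.splitlines()[0])
```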

Need to think seriously about putting this on a low-end laptop and sealing it up in a Faraday cage.

Just in case. 😉

December 22, 2016

Low fat computing

Filed under: Computer Science,Forth,Graphics,Visualization — Patrick Durusau @ 8:53 pm

Low fat computing by Karsten Schmidt

A summary by Malcolm Sparks of Schmidt’s presentation, along with the presentation itself.

Lots of strange and 3-D printable eye candy in the first 15 minutes or so, covering Schmidt’s background. Starts to really rock around 20 minutes in, with Forth code and very low-level coding.

To get a better idea of what Schmidt has been doing, see his website, thi.ng, his Forth repl in JavaScript, http://forth.thi.ng/, or his GitHub repository: Github: thi.ng

Stop by http://toxiclibs.org/, although the material there looks dated.

November 17, 2016

Operating Systems Design and Implementation (12th USENIX Symposium)

Filed under: Computer Science,CS Lectures,Cybersecurity,Security — Patrick Durusau @ 9:59 pm

Operating Systems Design and Implementation (12th USENIX Symposium) – Savannah, GA, USA, November 2-4, 2016.

Message from the OSDI ’16 Program Co-Chairs:

We are delighted to welcome you to the 12th USENIX Symposium on Operating Systems Design and Implementation, held in Savannah, GA, USA! This year’s program includes a record high 47 papers that represent the strength of our community and cover a wide range of topics, including security, cloud computing, transaction support, storage, networking, formal verification of systems, graph processing, system support for machine learning, programming languages, troubleshooting, and operating systems design and implementation.

Weighing in at seven hundred and ninety-seven (797) pages, this tome will prove more than sufficient to avoid annual family arguments during the holiday season.

Not to mention this is an opportunity to hone your skills to a fine edge.

November 3, 2016

Understanding the fundamentals of attacks (Theory of Exploitation)

Filed under: Computer Science,Cybersecurity,Security — Patrick Durusau @ 8:31 pm

Understanding the fundamentals of attacks – What is happening when someone writes an exploit? by Halvar Flake / Thomas Dullien.

The common hacking “bag of tricks,” as Halvar refers to it, covers all the major data breaches of the last 24 months.

No zero-day exploits.

Certainly none of the deep analysis offered by Halvar here.

Still, you owe it to yourself and your future on one side or the other of computer security, to review these slides and references carefully.

Even though Halvar concludes (in part)

Exploitation is programming emergent weird machines.

It does not require EIP/RIP, and is not a bag of tricks.

Theory of exploitation is still in embryonic stage.

Imagine the advantages of having mastered the art of exploitation theory at its inception.

In an increasingly digital world, you may be worth your own weight in gold. 😉

PS: Specifying the subject identity properties of exploits will assist in organizing them for future use/defense.

One expert hacker is like a highly skilled warrior.

Making exploits easy to discover/use by average hackers is like a skilled warrior facing a company of average fighters.

The outcome will be bloody, but never in doubt.

August 26, 2016

The Hanselminutes Podcast

Filed under: Computer Science,Programming — Patrick Durusau @ 1:07 pm

The Hanselminutes Podcast: Fresh Air for Developers by Scott Hanselman.

I went looking for Felienne’s podcast on code smells and discovered along with it, The Hanselminutes Podcast: Fresh Air for Developers!

Felienne’s podcast is #542 so there is a lot of content to enjoy! (I checked the archive. Yes, there really are 542 episodes as of today.)

Exploring Code Smells in code written by Children

Filed under: Computer Science,Programming — Patrick Durusau @ 10:52 am

Exploring Code Smells in code written by Children (podcast) by Dr. Felienne

From the description:

Felienne is always learning. In exploring her PhD dissertation and her public speaking experience it’s clear that she has no intent on stopping! Most recently she’s been exploring a large corpus of Scratch programs looking for Code Smells. How do children learn how to code, and when they do, does their code “smell?” Is there something we can do when teaching to promote cleaner, more maintainable code?

Felienne discusses a paper due to appear in September on analysis of 250K Scratch programs for code smells.

Thoughts on teaching programmers to detect code smells?

June 15, 2016

If You Believe In OpenAccess, Do You Practice OpenAccess?

Filed under: Computer Science,Open Access — Patrick Durusau @ 7:48 pm

CSC-OpenAccess LIBRARY

From the webpage:

CSC Open-Access Library aims to maintain and develop access to journal publication collections as a research resource for students, teaching staff, researchers and industrialists.

You can see a complete listing of the journals here.

Before you protest these are not Science or Nature, remember that Science and Nature did not always have the reputations they do today.

Let the quality of your work bolster the reputations of open access publications and attract others to them.

June 12, 2016

How to Run a Russian Hacking Ring [Just like Amway, Mary Kay … + Career Advice]

Filed under: Computer Science,Cybersecurity,Security — Patrick Durusau @ 12:41 pm

How to Run a Russian Hacking Ring by Kaveh Waddell.

From the post:

A man with intense eyes crouches over a laptop in a darkened room, his face and hands hidden by a black ski mask and gloves. The scene is lit only by the computer screen’s eerie glow.

Exaggerated portraits of malicious hackers just like this keep popping up in movies and TV, despite the best efforts of shows like Mr. Robot to depict hackers in a more realistic way. Add a cacophony of news about data breaches that have shaken the U.S. government, taken entire hospital systems hostage, and defrauded the international banking system, and hackers start to sound like omnipotent super-villains.

But the reality is, as usual, less dramatic. While some of the largest cyberattacks have been the work of state-sponsored hackers—the OPM data breach that affected millions of Americans last year, for example, or the Sony hack that revealed Hollywood’s intimate secrets​—the vast majority of the world’s quotidian digital malice comes from garden-variety hackers.

What a downer this would be at career day at the local high school.

Yes, you too can be a hacker but it’s as dull as anything you have seen in Dilbert.

Your location plays an important role in whether Russian hacking ring employment is in your future. Kaveh reports:


Even the boss’s affiliates, who get less than half of each ransom that they extract, make a decent wage. They earned an average of 600 dollars a month, or about 40 percent more than the average Russian worker.

$600/month is ok, if you are living in Russia, not so hot if you aspire to Venice Beach. (It’s too bad the beach cam doesn’t pan and zoom.)

The level of technical skill required for low-hanging-fruit hacking is falling, meaning more competition at the low end. Potential profits are going to fall even further.

The no-liability rule for buggy software will fall sooner rather than later, and skilled hackers (I mean security researchers) will find themselves in demand by both plaintiffs and defendants. You will earn more money if you can appear in court; some expert witnesses make $600/hour or more. (Compare the $600/month in Russia.)

Even if you can’t appear in court, for reasons that seem good to you, fleshing out the details of hacks is going to be in demand from all sides.

You may start at the shallow end of the pool but resolve not to stay there. Read deeply, practice every day, stay current on new developments and opportunities, and contribute to online communities.

May 31, 2016

“This guy’s arrogance takes your breath away”

Filed under: Computer Science — Patrick Durusau @ 7:32 pm

“This guy’s arrogance takes your breath away” – Letters between John W Backus and Edsger W Dijkstra, 1979 by Jiahao Chen.

From the post:

Item No. 155: Correspondence with Edsger Dijkstra. 1979

At the time of this correspondence, Backus had just won the 1977 Turing Award and had chosen to talk about his then-current research on functional programming (FP) for his award lecture in Seattle. See this pdf of the published version, noting that Backus himself described “significant differences” with the talk that was actually given. Indeed, the transcript at the LoC was much more casual and easier to follow.

Dijkstra, in his characteristically acerbic and hyperbolic style, wrote a scathing public review (EWD 692) and some private critical remarks in what looks like a series of letters with Backus.

From what I can tell, these letters are not part of the E. W. Dijkstra archives at UT Austin, nor are they available online anywhere else. So here they are for posterity.

You won’t find long-form exchanges such as these in the present day’s near-instant bait-reply cycles of email messages.

That’s unfortunate.

Chen has created a Github repository if you are interested in transcribing pre-email documents.

You can help create better access to the history of computer science and see how to craft a cutting remark, as opposed to blurting out the first insult that comes to mind.

Enjoy!
