Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

October 1, 2016

Type-driven Development … [Further Reading]

Filed under: Functional Programming,Types,Uncategorized — Patrick Durusau @ 3:49 pm

The Further Reading slide from Edwin Brady’s presentation Type-driven Development of Communicating Systems in Idris (Lambda World, 2016) was tweeted as an image, eliminating the advantages of hyperlinks.

I have reproduced that slide with the links as follows:

Further Reading

On total functional programming

On interactive programming with dependent types

On types for communicating systems:

For Wadler’s paper, you may enjoy the video of his presentation, Propositions as Sessions, or his slides (2016): Propositions as Sessions, Philip Wadler, University of Edinburgh, Betty Summer School, Limassol, Monday 27 June 2016.

April 12, 2016

Coeffects: Context-aware programming languages – Subject Identity As Type Checking?

Filed under: Subject Identity,TMRM,Topic Maps,Types — Patrick Durusau @ 3:14 pm

Coeffects: Context-aware programming languages by Tomas Petricek.

From the webpage:

Coeffects are Tomas Petricek’s PhD research project. They are a programming language abstraction for understanding how programs access the context or environment in which they execute.

The context may be resources on your mobile phone (battery, GPS location or a network printer), IoT devices in a physical neighborhood or historical stock prices. By understanding the neighborhood or history, a context-aware programming language can catch bugs earlier and run more efficiently.

This page is an interactive tutorial that shows a prototype implementation of coeffects in a browser. You can play with two simple context-aware languages, see how the type checking works and how context-aware programs run.

This page is also an experiment in presenting programming language research. It is a live environment where you can play with the theory using the power of new media, rather than staring at dead pieces of wood (although we have those too).

(break from summary)

Programming languages evolve to reflect the changes in the computing ecosystem. The next big challenge for programming language designers is building languages that understand the context in which programs run.

This challenge is not easy to see. We are so used to working with context using the current cumbersome methods that we do not even see that there is an issue. We also do not realize that many programming features related to context can be captured by a simple unified abstraction. This is what coeffects do!
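As a very loose Haskell sketch of the idea (nothing like Petricek’s actual coeffect calculus, which is built on comonads; the names and encoding here are invented for illustration), the context a computation demands can be recorded in its type:

  {-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, TypeOperators #-}
  import GHC.TypeLits (Symbol)

  -- A value tagged, at the type level, with the context resources it needs.
  newtype Needs (rs :: [Symbol]) a = Needs a

  -- Combining two computations concatenates their context demands.
  type family Append (xs :: [Symbol]) (ys :: [Symbol]) :: [Symbol] where
    Append '[]       ys = ys
    Append (x ': xs) ys = x ': Append xs ys

  both :: Needs xs a -> Needs ys b -> Needs (Append xs ys) (a, b)
  both (Needs a) (Needs b) = Needs (a, b)

  gps :: Needs '["gps"] (Double, Double)
  gps = Needs (51.5, -0.1)

  battery :: Needs '["battery"] Int
  battery = Needs 87

  -- both gps battery :: Needs '["gps", "battery"] ((Double, Double), Int)

A type checker armed with that information can reject a program that demands a resource its deployment context lacks, catching the bug at compile time.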

What if we extend the idea of context to include the context within which words appear?

For example, writing a police report, the following sentence appeared:

There were 20 or more <proxy string="black" pos="noun" synonym="African" type="race"/>s in the group.

For display purposes, the string value “black” appears in the sentence:

There were 20 or more blacks in the group.

But a search for the color “black” would not return that report, because type = color does not match type = race.

On the other hand, if I searched for African-American, that report would show up because “black” with type = race is recognized as a synonym for people of African extraction.

Inline proxies are the easiest to illustrate but that is only one way to serialize such a result.
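A minimal Haskell sketch of that search behavior (the types and field names are hypothetical, chosen only to mirror the proxy attributes above):

  data SubjectType = Race | Color deriving (Eq, Show)

  data Proxy = Proxy
    { display  :: String       -- the string shown to readers
    , synonym  :: String       -- an equivalent subject label
    , subjType :: SubjectType  -- the type the string carries here
    } deriving Show

  -- A term matches only when the requested type agrees with the proxy's type.
  matches :: String -> SubjectType -> Proxy -> Bool
  matches term t p = t == subjType p && (term == display p || term == synonym p)

  report :: [Proxy]
  report = [Proxy "black" "African" Race]

  -- matches "black" Color   misses the report (type mismatch)
  -- matches "African" Race  finds it (synonym plus matching type)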

If done in an authoring interface, such an approach would have the distinct advantage of offering the original author the choice of subject properties.

The advantage of involving the original author is that they have an interest in and awareness of the document in question, quite unlike automated processes that later attempt annotation by rote.

March 9, 2015

Programs and Proofs: Mechanizing Mathematics with Dependent Types

Filed under: Coq,Functional Programming,Proof Theory,Software Engineering,Types — Patrick Durusau @ 3:49 pm

Programs and Proofs: Mechanizing Mathematics with Dependent Types by Ilya Sergey.

From the post:

coq-logo

The picture “Le coq mécanisé” is courtesy of Lilia Anisimova

These lecture notes are the result of the author’s personal experience of learning how to structure formal reasoning using the Coq proof assistant and employ Coq in large-scale research projects. The present manuscript offers a brief and practically-oriented introduction to the basic concepts of mechanized reasoning and interactive theorem proving.

The primary audience of the manuscript is readers with expertise in software development and programming and knowledge of discrete mathematics at the level of an undergraduate university program. The high-level goal of the course is, therefore, to demonstrate how much the rigorous mathematical reasoning and development of robust and intellectually manageable programs have in common, and how understanding of common programming language concepts provides a solid background for building mathematical abstractions and proving theorems formally. The low-level goal of this course is to provide an overview of the Coq proof assistant, taken in both its incarnations: as an expressive functional programming language with dependent types and as a proof assistant providing support for mechanized interactive theorem proving.

By aiming at these two goals, this manuscript is, thus, intended to provide a demonstration of how the concepts familiar from mainstream programming languages, serving as parts of good programming practices, can provide illuminating insights about the nature of reasoning in Coq’s logical foundations and make it possible to reduce the burden of mechanical theorem proving. These insights will eventually give the reader the freedom to focus solely on the essential part of the formal development instead of fighting with the proof assistant in futile attempts to encode the “obvious” mathematical intuition.

This is one approach to changing the current “it works, let’s ship” software development model. Users prefer software that works, but in these security-conscious times, software that works and is to some degree secure is even better.

Looking forward to software with a warranty as a major disruption of the software industry. Major vendors are organized around there being no warranty/liability for software failures. A startup, organized to account for warranty/liability, would be a powerful opponent.

Proof techniques are one way to enable the offering of limited warranties for software products.

I first saw this in a tweet by CompSciFact.

January 8, 2015

The Type Theory Podcast

Filed under: Functional Programming,Programming,Types — Patrick Durusau @ 1:58 pm

The Type Theory Podcast

From the about page:

The Type Theory Podcast is a podcast about type theory and its interactions with programming, mathematics, and philosophy. Our goal is to bring interesting research to a wider audience.

I’m not sure how I missed this when it started. 😉

There are three episodes available now:

Episode 1: Peter Dybjer on types and testing

In our inaugural episode, we speak with Peter Dybjer from Chalmers University of Technology. Peter has made significant contributions to type theory, including inductive families, induction-recursion, and categorical models of dependent types. He is generally interested in program correctness, programming language semantics, and the connection between mathematics and programming. Today, we will talk about the relationship between QuickCheck-style testing and proofs and verification in type theory.

Episode 2: Edwin Brady on Idris

In our second episode, we speak with Edwin Brady from the University of St. Andrews. Since 2008, Edwin has been working on Idris, a functional programming language with dependent types. This episode is very much about programming: we discuss the language Idris, its history, its implementation strategies, and plans for the future.

Episode 3: Dan Licata on Homotopy Type Theory

In our third episode, we discuss homotopy type theory (HoTT) with Wesleyan University’s Dan Licata. Dan has participated in much of the development of HoTT, having completed his PhD at CMU and having been a part of the Institute for Advanced Study’s special year on the subject. In our interview, we discuss the basics of HoTT, some potential applications in both mathematics and computing, as well as ongoing work on computation, univalence, and cubes.

Each episode has links to additional reading materials and resources.
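For a taste of Episode 1’s topic, here is the canonical QuickCheck-style property in Haskell: random testing yields evidence for a claim that, in a type theory, you would instead prove.

  import Test.QuickCheck (quickCheck)

  -- Reversing twice is the identity; QuickCheck tries random lists.
  prop_reverseInvolutive :: [Int] -> Bool
  prop_reverseInvolutive xs = reverse (reverse xs) == xs

  main :: IO ()
  main = quickCheck prop_reverseInvolutive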

Enjoy!

December 27, 2014

Software Foundations

Filed under: Coq,Functional Programming,Programming,Proof Theory,Software Engineering,Types — Patrick Durusau @ 4:49 pm

Software Foundations by Benjamin Pierce and others.

From the preface:

This electronic book is a course on Software Foundations, the mathematical underpinnings of reliable software. Topics include basic concepts of logic, computer-assisted theorem proving and the Coq proof assistant, functional programming, operational semantics, Hoare logic, and static type systems. The exposition is intended for a broad range of readers, from advanced undergraduates to PhD students and researchers. No specific background in logic or programming languages is assumed, though a degree of mathematical maturity will be helpful.

One novelty of the course is that it is one hundred per cent formalized and machine-checked: the entire text is literally a script for Coq. It is intended to be read alongside an interactive session with Coq. All the details in the text are fully formalized in Coq, and the exercises are designed to be worked using Coq.

The files are organized into a sequence of core chapters, covering about one semester’s worth of material and organized into a coherent linear narrative, plus a number of “appendices” covering additional topics. All the core chapters are suitable for both graduate and upper-level undergraduate students.

This looks like a real treat!

Imagine security in a world where buggy software (by error and design) wasn’t patched by more buggy software (by error and design) and protected by security software, which is also buggy (by error and design). Would that change the complexion of current security issues?

I first saw this in a tweet by onepaperperday.

PS: Sony got hacked, again. Rumor is that this latest Sony hack was an extra credit exercise for a 6th grade programming class.

December 16, 2014

Type systems and logic

Filed under: Logic,Types — Patrick Durusau @ 4:27 pm

Type systems and logic by Alyssa Carter (From Code Word – Hacker School)

From the post:

An important result in computer science and type theory is that a type system corresponds to a particular logic system.

How does this work? The basic idea is that of the Curry-Howard Correspondence. A type is interpreted as a proposition, and a value is interpreted as a proof of the proposition corresponding to its type. Most standard logical connectives can be derived from this idea: for example, the values of the pair type (A, B) are pairs of values of types A and B, meaning they’re pairs of proofs of A and B, which means that (A, B) represents the logical conjunction “A && B”. Similarly, logical disjunction (“A || B”) corresponds to what’s called a “tagged union” type: a value (proof) of Either A B is either a value (proof) of A or a value (proof) of B.

This might be a lot to take in, so let’s take a few moments for concrete perspective.

Types like Int and String are propositions – you can think of simple types like these as just stating that “an Int exists” or “a String exists”. 1 is a proof of Int, and "hands" is a proof of String. (Int, String) is a simple tuple type, stating that “there exists an Int and there exists a String”. (1, "hands") is a proof of (Int, String). Finally, the Either type is a bit more mysterious if you aren’t familiar with Haskell, but the type Either a b can contain values of type a tagged as the “left” side of an Either or values of type b tagged as the “right” side of an Either. So Either Int String means “either there exists an Int or there exists a String”, and it can be proved by either Left 1 or Right "hands". The tags ensure that you don’t lose any information if the two types are the same: Either Int Int can be proved by Left 1 or Right 1, which can be distinguished from each other by their tags.
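The examples in that paragraph are ordinary Haskell; a few lines make the reading concrete (the value names are mine):

  -- A proof of (Int, String): "an Int exists AND a String exists."
  conj :: (Int, String)
  conj = (1, "hands")

  -- Two distinct proofs of Either Int Int, distinguished by their tags.
  disjL, disjR :: Either Int Int
  disjL = Left 1
  disjR = Right 1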

Heavy sledding but should very much be on your reading list.

It has gems like:

truth is useless for computation and proofs are not

I would have far fewer objections to some logic/ontology discussions if they limited their claims to computation.

People are free to accept or reject any result of computation, depending on how the result compares to their perception of the world.

Case in point: the five-year-old who could not board a plane because they shared a name with someone on the no-fly list.

One person, a dull TSA agent, could not see beyond the result of a calculation on the screen.

Everyone else could see a five-year-old who, while cranky, wasn’t on the no-fly list.

I first saw this in a tweet by Rahul Goma Phulore.

October 15, 2014

Inductive Graph Representations in Idris

Filed under: Functional Programming,Graphs,Types — Patrick Durusau @ 8:54 am

Inductive Graph Representations in Idris by Michael R. Bernstein.

An early exploration of Inductive Graphs and Functional Graph Algorithms by Martin Erwig.

Abstract (of Erwig’s paper):

We propose a new style of writing graph algorithms in functional languages which is based on an alternative view of graphs as inductively defined data types. We show how this graph model can be implemented efficiently and then we demonstrate how graph algorithms can be succinctly given by recursive function definitions based on the inductive graph view. We also regard this as a contribution to the teaching of algorithms and data structures in functional languages since we can use the functional-graph algorithms instead of the imperative algorithms that are dominant today.
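A sketch of the inductive view, following the paper’s notation (Haskell’s fgl library packages the same idea behind a type class; the literal datatype below is only for illustration):

  type Node = Int

  -- A node's context: incoming edges, the node, its label, outgoing edges.
  type Context a b = ([(b, Node)], Node, a, [(b, Node)])

  -- A graph is either empty or a context joined onto the rest of the graph.
  data Graph a b = Empty | Context a b :& Graph a b
  infixr 5 :&

  -- Recursive functions follow the inductive structure directly.
  nodeCount :: Graph a b -> Int
  nodeCount Empty    = 0
  nodeCount (_ :& g) = 1 + nodeCount g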

You can follow Michael at: @mrb_bk or https://github.com/mrb or his blog: http://michaelrbernste.in/.

More details on Idris: A Language With Dependent Types.

June 27, 2014

Propositions as Types

Filed under: Computer Science,Logic,Programming,Types — Patrick Durusau @ 12:56 pm

Propositions as Types by Philip Wadler.

From the Introduction:

Powerful insights arise from linking two fields of study previously thought separate. Examples include Descartes’s coordinates, which links geometry to algebra, Planck’s Quantum Theory, which links particles to waves, and Shannon’s Information Theory, which links thermodynamics to communication. Such a synthesis is offered by the principle of Propositions as Types, which links logic to computation. At first sight it appears to be a simple coincidence—almost a pun—but it turns out to be remarkably robust, inspiring the design of automated proof assistants and programming languages, and continuing to influence the forefronts of computing.

Propositions as Types is a notion with many names and many origins. It is closely related to the BHK Interpretation, a view of logic developed by the intuitionists Brouwer, Heyting, and Kolmogorov in the 1930s. It is often referred to as the Curry-Howard Isomorphism, referring to a correspondence observed by Curry in 1934 and refined by Howard in 1969 (though not published until 1980, in a Festschrift dedicated to Curry). Others draw attention to significant contributions from de Bruijn’s Automath and Martin-Löf’s Type Theory in the 1970s. Many variant names appear in the literature, including Formulae as Types, Curry-Howard-de Bruijn Correspondence, Brouwer’s Dictum, and others.

Propositions as Types is a notion with depth. It describes a correspondence between a given logic and a given programming language, for instance, between Gentzen’s intuitionistic natural deduction (a logic) and Church’s simply-typed lambda calculus (which may be viewed as a programming language). At the surface, it says that for each proposition in the logic there is a corresponding type in the programming language—and vice versa…
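The surface of the correspondence is easy to exhibit in Haskell, where implication is the function type and inference steps are programs (a sketch; Wadler works in the lambda calculus directly):

  -- Modus ponens: from A implies B, and A, conclude B. The proof is application.
  modusPonens :: (a -> b) -> a -> b
  modusPonens f x = f x

  -- Transitivity of implication is function composition.
  transitive :: (a -> b) -> (b -> c) -> (a -> c)
  transitive f g = g . f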

Important work even if it is very heavy sledding!

BTW, Wadler mentions two textbook treatments of the subject:

M. H. Sørensen and P. Urzyczyn. Lectures on the Curry-Howard isomorphism. Elsevier, 2006. Amazon has it listed for $146.33.

S. Thompson. Type Theory and Functional Programming. Addison-Wesley, 1991. Better luck here, out of print and posted online by the author: https://www.cs.kent.ac.uk/people/staff/sjt/TTFP/. Errata page was last updated October 2013.

I just glanced at 4.10 Equality and 5.1 Assumptions – 5.2 Naming and abbreviations in Thompson and it promises to be an interesting read!

Enjoy!

I first saw this in a tweet by Chris Ford.

May 25, 2014

Verified Networking using Dependent Types

Filed under: Dependency,Functional Programming,Programming,Types — Patrick Durusau @ 4:30 pm

Verified Networking using Dependent Types by Simon Fowler.

Abstract:

Strongly, statically typed functional programming languages have found a strong grounding in academia and industry for a variety of reasons: they are concise, their type systems provide additional static correctness guarantees, and the structured management of side effects aids easier reasoning about the operation of programs, to name but a few.

Dependently-typed languages take these concepts a step further: by allowing types to be predicated on values, it is possible to impose arbitrarily specific type constraints on functions, resulting in increased confidence about their runtime behaviour.

This work demonstrates how dependent types may be used to increase confidence in network applications. We show how dependent types may be used to enforce resource usage protocols inherent in C socket programming, providing safety guarantees, and examine how a dependently-typed embedded domain-specific language may be used to enforce the conformance of packets to a given structure. These concepts are explored using two larger case studies: packets within the Domain Name System (DNS) and a networked game.
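Haskell’s phantom types give a rough approximation of the socket-protocol enforcement Fowler obtains with dependent types in Idris (a hedged sketch: the state names and stub bodies below are invented, not Fowler’s API):

  {-# LANGUAGE DataKinds, KindSignatures #-}

  -- Socket protocol states, promoted to the type level.
  data State = Closed | Bound | Listening | Connected

  -- A socket indexed by its protocol state (Int stands in for a descriptor).
  newtype Socket (s :: State) = Socket Int

  create :: IO (Socket 'Closed)
  create = pure (Socket 0)

  bind :: Socket 'Closed -> Int -> IO (Socket 'Bound)
  bind (Socket fd) _port = pure (Socket fd)

  listen :: Socket 'Bound -> IO (Socket 'Listening)
  listen (Socket fd) = pure (Socket fd)

  accept :: Socket 'Listening -> IO (Socket 'Connected)
  accept (Socket fd) = pure (Socket fd)

  send :: Socket 'Connected -> String -> IO ()
  send _ msg = putStrLn ("send: " ++ msg)

  -- Calling send before accept, or bind twice, is a type error rather
  -- than a runtime surprise.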

The use of statically typed functional programming languages is spreading. Fowler’s dissertation is yet another illustration of that fact.

When you read:

…examine how a dependently-typed embedded domain-specific language may be used to enforce the conformance of packets to a given structure.

Do you also hear:

…examine how a dependently-typed embedded domain-specific language may be used to enforce the conformance of proxies to a given structure.

??

May 17, 2014

Types, and two approaches to problem solving

Filed under: Computer Science,Problem Solving,Types — Patrick Durusau @ 6:39 pm

Types, and two approaches to problem solving by Dan Piponi.

From the post:

There are two broad approaches to problem solving that I see frequently in mathematics and computing. One is attacking a problem via subproblems, and another is attacking a problem via quotient problems. The former is well known though I’ll give some examples to make things clear. The latter can be harder to recognise but there is one example that just about everyone has known since infancy.

I don’t want to spoil Dan’s surprise so all I can say is go read the post!

An intuitive appreciation for types may help you with the full monty of types.

March 28, 2014

A Practical Optional Type System for Clojure [Types for Topic Maps?]

Filed under: Clojure,Programming,Types — Patrick Durusau @ 10:36 am

A Practical Optional Type System for Clojure by Ambrose Bonnaire-Sergeant.

Abstract:

Dynamic programming languages often abandon the advantages of static type checking in favour of their characteristic convenience and flexibility. Static type checking eliminates many common user errors at compile-time that are otherwise unnoticed, or are caught later in languages without static type checking. A recent trend is to aim to combine the advantages of both kinds of languages by adding optional static type systems to languages without static type checking, while preserving the idioms and style of the language.

This dissertation describes my work on designing an optional static type system for the Clojure programming language, a dynamically typed dialect of Lisp, based on the lessons learnt from several projects, primarily Typed Racket. This work includes designing and building a type checker for Clojure running on the Java Virtual Machine. Several experiments are conducted using this prototype, particularly involving existing Clojure code that is sufficiently complicated that type checking increases confidence that the code is correct. For example, nearly all of algo.monads, a Clojure Contrib library for monadic programming, is able to be type checked. Most monad, monad transformer, and monadic function definitions can be type checked, usually by adding type annotations in natural places like function definitions.

There is significant future work to fully type check all Clojure features and idioms. For example, multimethod definitions and functions with particular constraints on the number of variable arguments they accept (particularly functions taking only an even number of variable arguments) are troublesome. Also, there are desirable features from the Typed Racket project that are missing, such as automatic runtime contract generation and a sophisticated blame system, both of which are designed to improve error messages when mixing typed and untyped code in similar systems.

Overall, the work described in this dissertation leads to the conclusion that it appears to be both practical and useful to design and implement an optional static type system for the Clojure programming language.

Information retrieval that relies upon merging representatives of the same subject would benefit from type checking.

In XTM we rely upon string equivalence of URIs for merging of topics. That leaves you with visual inspection to audit merging.

I could put:

http://www.durusau.net/general/background.html, and

http://en.wikipedia.org/wiki/Lion_king

as subject identifiers in a single topic and a standard XTM processor would merrily merge topics with those subject identifiers together.

Recalling the rules of the TMDM to be:

Equality rule:

Two topic items are equal if they have:

  • at least one equal string in their [subject identifiers] properties,
  • at least one equal string in their [item identifiers] properties,
  • at least one equal string in their [subject locators] properties,
  • an equal string in the [subject identifiers] property of the one topic item and the [item identifiers] property of the other, or
  • the same information item in their [reified] properties.

Adding data types to subject identifiers could alert authors to merge errors long before users discover them, if they ever do.

Otherwise merge errors in topic maps may lie undetected and uncorrected for some indeterminate period of time. (Sounds like software bugs, doesn’t it?)
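A small Haskell sketch of the refined rule (hypothetical; the TMDM itself knows nothing of these kinds): give each subject identifier a type, and let the merge test flag an IRI shared across unlike subjects instead of merging silently.

  data SubjectKind = Person | Film | Color deriving (Eq, Show)

  data Identifier = Identifier
    { iri  :: String
    , kind :: SubjectKind
    } deriving (Eq, Show)

  -- Topics still merge on a shared IRI, but a shared IRI with differing
  -- kinds is reported as an authoring error.
  shouldMerge :: [Identifier] -> [Identifier] -> Either String Bool
  shouldMerge xs ys =
    case [ (x, y) | x <- xs, y <- ys, iri x == iri y ] of
      [] -> Right False
      ps | all (\(x, y) -> kind x == kind y) ps -> Right True
         | otherwise -> Left "same IRI used for different kinds of subject"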

I first saw this in a tweet by mrb.

February 5, 2014

Type Theory Foundations

Filed under: CS Lectures,Types — Patrick Durusau @ 11:54 am

Robert Harper’s lectures from the Oregon Programming Languages School 2012, University of Oregon.

Lecture 1

Lecture 2

Lecture 3

Lecture 4

Lecture 5

Lecture 6

If you are going to follow Walter Bright in writing a new computer language, you will need to study types.

Taught along with Category Theory Foundations and Proof Theory Foundations.

January 30, 2014

Certified Programming with Dependent Types

Filed under: Dependency,Programming,Types — Patrick Durusau @ 8:43 pm

Certified Programming with Dependent Types by Adam Chlipala.

From the introduction:

We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant. Almost every subject covered is also relevant to interactive computer theorem-proving in general, such as for traditional mathematical theorems. In fact, I hope to demonstrate how verified programs are useful as building blocks in all sorts of formalizations.

The idea of a certified program features prominently in this book’s title. Here the word “certified” does not refer to governmental rules for how the reliability of engineered systems may be demonstrated to sufficiently high standards. Rather, this concept of certification, a standard one in the programming languages and formal methods communities, has to do with the idea of a certificate, or formal mathematical artifact proving that a program meets its specification. Government certification procedures rarely provide strong mathematical guarantees, while certified programming provides guarantees about as strong as anything we could hope for. We trust the definition of a foundational mathematical logic, we trust an implementation of that logic, and we trust that we have encoded our informal intent properly in formal specifications, but few other opportunities remain to certify incorrect software. For compilers and other programs that run in batch mode, the notion of a certifying program is also common, where each run of the program outputs both an answer and a proof that the answer is correct. Any certifying program can be composed with a proof checker to produce a certified program, and this book focuses on the certified case, while also introducing principles and techniques of general interest for stating and proving theorems in Coq.
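Chlipala’s distinction between certifying and certified programs fits in a few lines of Haskell (sorting as the toy example; the checker here is mine, not the book’s):

  import Data.List (sort, (\\))

  -- A certifying run: the answer plus the evidence needed to check it
  -- (here the original input serves as the evidence).
  certifyingSort :: [Int] -> ([Int], [Int])
  certifyingSort xs = (sort xs, xs)

  -- An independent checker: output is ordered and a permutation of the
  -- input. Composing the two yields a certified result.
  checker :: ([Int], [Int]) -> Bool
  checker (ys, xs) =
    and (zipWith (<=) ys (drop 1 ys)) && null (xs \\ ys) && null (ys \\ xs)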

It is hard to say whether this effort at certified programming will prove to be any more successful than Z notation.

On the other hand, the demand for programs that are provably free of government intrusion or backdoors is at an all-time high.

Government overreaching, overreaching that was disproportionate to any rational goal, will power the success of open source programming and the advent of certified programs.

Ironic that such a pernicious activity will have such unintended good results.

I first saw this in a tweet by Computer Science.

December 4, 2013

Homotopy Type Theory

Filed under: Homology,Types — Patrick Durusau @ 4:10 pm

Homotopy Type Theory by Robert Harper. (Course with video lectures, notes, etc.)

Synopsis:

This is a graduate research seminar on Homotopy Type Theory (HoTT), a recent enrichment of Intuitionistic Type Theory (ITT) to include "higher-dimensional" types. The dimensionality of a type refers to the structure of its paths, the constructive witnesses to the equality of pairs of elements of a type, which themselves form a type, the identity type. In general a type is infinite dimensional in the sense that it exhibits non-trivial structure at all dimensions: it has elements, paths between elements, paths between paths, and so on to all finite levels. Moreover, the paths at each level exhibit the algebraic structure of a (higher) groupoid, meaning that there is always the "null path" witnessing reflexivity, the "inverse" path witnessing symmetry, and the "concatenation" of paths witnessing transitivity such that group-like laws hold "up to higher homotopy". Specifically, there are higher-dimensional paths witnessing the associative, unital, and inverse laws for these operations. Altogether this means that a type is a weak ∞-groupoid.

The significance of the higher-dimensional structure of types lies in the concept of a type-indexed family of types. Such families exhibit the structure of a fibration, which means that a path between two indices "lifts" to a transport mapping between the corresponding instances of the family that is, in fact, an equivalence. Thinking of paths as constructive witnesses for equality, this amounts to saying that equal indices give rise to equivalent types, and hence, by univalence, equal elements of the universe in which the family is valued. Thus, for example, if we think of the interval I as a type with two endpoints connected by a path, then an I-indexed family of types must assign equivalent types to the endpoints. In contrast the type B of booleans consists of two disconnected points, so that a B-indexed family of types may assign unrelated types to the two points of B. Similarly, mappings from I into another type A must assign connected points in A to the endpoints of the interval, whereas mappings from B into A are free to assign arbitrary points of A to the two booleans. These preservation principles are central to the structure of HoTT.

In many cases the path structure of a type becomes trivial beyond a certain dimension, called the level of the type. By convention the levels start at -2 and continue through -1, 0, 1, 2, and so on indefinitely. At the lowest, -2, level, the path structure of a type is degenerate in that there is an element to which all other elements are equal; such a type is said to be contractible, and is essentially a singleton. At the next higher level, -1, the type of paths between any two elements is contractible (level -2), which means that any two elements are equal, if there are any elements at all; such a type is a sub-singleton or h-proposition. At the next level, 0, the type of paths between paths between elements is contractible, so that any two elements are equal "in at most one way"; such a type is a set whose types of paths (equality relations) are all h-prop’s. Continuing in this way, types of level 1 are groupoids, those of level 2 are 2-groupoids, and so on for all finite levels.

ITT is capable of expressing only sets, which are types of level 0. Such types may have elements, and two elements may be considered equal in at most one way. A large swath of (constructive) mathematics may be formulated using only sets, and hence is amenable to representation in ITT. Computing applications, among others, require more than just sets. For example, it is often necessary to suppress distinctions among elements of a type so as to avoid over-specification; this is called proof irrelevance. Traditionally ITT has been enriched with an ad hoc treatment of proof irrelevance by introducing a universe of "propositions" with no computational content. In HoTT such propositions are types of level -1, requiring no special treatment or distinction. Such types arise by propositional truncation of a type to render degenerate the path structure of a type above level -1, ensuring that any two elements are equal in the sense of having a path between them.

Propositional truncation is just one example of a higher inductive type, one that is defined by specifying generators not only for its elements, but also for its higher-dimensional paths. The propositional truncation of a type is one that includes all of the elements of the type, and, in addition, a path between any two elements, rendering them equal. It is a limiting case of a quotient type in which only certain paths between elements are introduced, according to whether they are deemed to be related. Higher inductive types also permit the representation of higher-dimensional objects, such as the spheres of arbitrary dimension, as types, simply by specifying their "connectivity" properties. For example, the topological circle consists of a base point and a path starting and ending at that point, and the topological disk may be thought of as two half circles that are connected by a higher path that "fills in" the interior of the circle. Because of their higher path structure, such types are not sets, and neither are constructions such as the product of two circles.

The univalence axiom implies that an equivalence between types (an "isomorphism up to isomorphism") determines a path in a universe containing such types. Since two types can be equivalent in many ways (for example, there can be distinct bijections between two sets), univalence gives rise to types that are not sets, but rather are of a higher level, or dimension. The univalence axiom is mathematically efficient because it allows us to treat equivalent types as equal, and hence interchangeable in all contexts. In informal settings such identifications are often made by convention; in formal homotopy type theory such identifications are true equations.
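Stated schematically (following the synopsis, with U a universe of types), univalence says that equivalence of types is itself equivalent to equality in the universe:

  \[
    \mathsf{ua} : (A \simeq B) \;\simeq\; (A =_{\mathcal{U}} B)
  \]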

If you think data types are semantic primitives with universal meaning/understanding, feel free to ignore this posting.

Data types can be usefully treated “as though” they are semantic primitives, but mistaking convenience for truth can be expensive.

Consider the never-ending cycle of enterprise-level ETL, for example. Even when it ends well, it is expensive.

And there are all the cases where ETL or data integration don’t end well.

Homotopy Type Theory may not be the answer to those problems, but our current practices are known to not work.

Why not bet on an uncertain success versus the certainty of expense and near-certainty of failure?

June 30, 2013

Type Theory & Functional Programming [Types in Topic Maps]

Filed under: Mathematics,Programming,Types — Patrick Durusau @ 1:06 pm

Type Theory & Functional Programming by Simon Thompson.

From the introduction:

Constructive Type theory has been a topic of research interest to computer scientists, mathematicians, logicians and philosophers for a number of years. For computer scientists it provides a framework which brings together logic and programming languages in a most elegant and fertile way: program development and verification can proceed within a single system. Viewed in a different way, type theory is a functional programming language with some novel features, such as the totality of all its functions, its expressive type system allowing functions whose result type depends upon the value of its input, and sophisticated modules and abstract types whose interfaces can contain logical assertions as well as signature information. A third point of view emphasizes that programs (or functions) can be extracted from proofs in the logic.

Up until now most of the material on type theory has only appeared in proceedings of conferences and in research papers, so it seems appropriate to try to set down the current state of development in a form accessible to interested final-year undergraduates, graduate students, research workers and teachers in computer science and related fields – hence this book.

The book can be thought of as giving both a first and a second course in type theory. We begin with introductory material on logic and functional programming, and follow this by presenting the system of type theory itself, together with many examples. As well as this we go further, looking at the system from a mathematical perspective, thus elucidating a number of its important properties. Then we take a critical look at the profusion of suggestions in the literature about why and how type theory could be augmented. In doing this we are aiming at a moving target; it must be the case that further developments will have been made before the book reaches the press. Nonetheless, such a survey can give the reader a much more developed sense of the potential of type theory, as well as giving the background of what is to come.

The goal posts of type theory have moved since 1991, when this work was published, but the need to learn the foundations of type theory has not.

In a topic map context, consider the potential of types that define:

  1. applicable merging rules
  2. allowable sub-types
  3. permitted roles
  4. presence of other values (by type or value)

among other potential rules.
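As a hedged Haskell sketch of items 1 and 3 (no topic map standard defines anything like this; the class and names are invented), such rules could hang off a topic’s type:

  -- Each topic type declares its own merging rule and permitted roles.
  class TopicType t where
    mergeRule      :: t -> t -> Bool  -- item 1: applicable merging rules
    permittedRoles :: t -> [String]   -- item 3: permitted roles

  newtype PersonTopic = PersonTopic { personName :: String }

  instance TopicType PersonTopic where
    mergeRule a b    = personName a == personName b
    permittedRoles _ = ["author", "editor", "translator"]

An authoring tool could then check a proposed merge or role assignment against the declared type before it ever reaches a user.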

December 8, 2012

Practical Foundations for Programming Languages

Filed under: CS Lectures,Programming,Types — Patrick Durusau @ 5:32 pm

PFPL is out! by Robert Harper.

From the post:

Practical Foundations for Programming Languages, published by Cambridge University Press, is now available in print! It can be ordered from the usual sources, and maybe some unusual ones as well. If you order directly from Cambridge using this link, you will get a 20% discount on the cover price (pass it on).

Since going to press I have, inevitably, been informed of some (so far minor) errors that are corrected in the online edition. These corrections will make their way into the second printing. If you see something fishy-looking, compare it with the online edition first to see whether I may have already corrected the mistake. Otherwise, send your comments to me: rwh@cs.cmu.edu.

By the way, the cover artwork is by Scott Draves, a former student in my group, who is now a professional artist as well as a researcher at Google in NYC. Thanks, Scott!

Update: The very first author’s copy hit my desk today!

Congratulations to Robert!

The holidays are upon us so order early and often!

August 25, 2012

Polarity in Type Theory

Filed under: Category Theory,Types — Patrick Durusau @ 2:33 pm

Polarity in Type Theory by Robert Harper.

From the post:

There have recently arisen some misguided claims about a supposed opposition between functional and object-oriented programming. The claims amount to a belated recognition of a fundamental structure in type theory first elucidated by Jean-Marc Andreoli, and developed in depth by Jean-Yves Girard in the context of logic, and by Paul Blain Levy and Noam Zeilberger in the context of programming languages. In keeping with the general principle of computational trinitarianism, the concept of polarization has meaning in proof theory, category theory, and type theory, a sure sign of its fundamental importance.

Polarization is not an issue of language design, it is an issue of type structure. The main idea is that types may be classified as being positive or negative, with the positive being characterized by their structure and the negative being characterized by their behavior. In a sufficiently rich type system one may consider, and make effective use of, both positive and negative types. There is nothing remarkable or revolutionary about this, and, truly, there is nothing really new about it, other than the terminology. But through the efforts of the above-mentioned researchers, and others, we have learned quite a lot about the importance of polarization in logic, languages, and semantics. I find it particularly remarkable that Andreoli’s work on proof search turned out to also be of deep significance for programming languages. This connection was developed and extended by Zeilberger, on whose dissertation I am basing this post.

The simplest and most direct way to illustrate the ideas is to consider the product type, which corresponds to conjunction in logic. There are two possible ways that one can formulate the rules for the product type that from the point of view of inhabitation are completely equivalent, but from the point of view of computation are quite distinct. Let us first state them as rules of logic, then equip these rules with proof terms so that we may study their operational behavior. For the time being I will refer to these as Method 1 and Method 2, but after we examine them more carefully, we will find more descriptive names for them.
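The two formulations can be sketched in Haskell (loosely, since Haskell’s laziness blurs the proof-theoretic distinction Harper is drawing):

  {-# LANGUAGE RankNTypes #-}

  -- Method 1, the positive product: characterized by its structure,
  -- that is, how it is built; it is consumed by pattern matching.
  data PPair a b = PPair a b

  pFst :: PPair a b -> a
  pFst (PPair a _) = a

  -- Method 2, the negative product: characterized by its behavior,
  -- that is, how it responds to observation by the projections.
  newtype NPair a b = NPair { observe :: forall r. (a -> b -> r) -> r }

  nPair :: a -> b -> NPair a b
  nPair a b = NPair (\k -> k a b)

  nFst :: NPair a b -> a
  nFst p = observe p (\a _ -> a)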

Best read in the morning with a fresh cup of coffee (or whenever you do your best work).

Can’t talk about equivalence without types. (Well, not interchangeably.)

August 11, 2012

OPLSS 2012

Filed under: Language,Language Design,Programming,Types — Patrick Durusau @ 3:42 pm

OPLSS 2012 by Robert Harper.

From the post:

The 2012 edition of the Oregon Programming Languages Summer School was another huge success, drawing a capacity crowd of 100 eager students anxious to learn the latest ideas from an impressive slate of speakers. This year, as last year, the focus was on languages, logic, and verification, from both a theoretical and a practical perspective. The students have a wide range of backgrounds, some already experts in many of the topics of the school, others with little or no prior experience with logic or semantics. Surprisingly, a large percentage (well more than half, perhaps as many as three quarters) had some experience using Coq, a large uptick from previous years. This seems to represent a generational shift—whereas such topics were even relatively recently seen as the province of a few zealots out in left field, nowadays students seem to accept the basic principles of functional programming, type theory, and verification as a given. It’s a victory for the field, and extremely gratifying for those of us who have been pressing forward with these ideas for decades despite resistance from the great unwashed. But it’s also bittersweet, because in some ways it’s more fun to be among the very few who have created the future long before it comes to pass. But such are the proceeds of success.

As if a post meriting your attention wasn’t enough, it concludes with:

Videos of the lectures, and course notes provided by the speakers, are all available at the OPLSS 12 web site.

Just a summary of what you will find:

  • Logical relations — Amal Ahmed
  • Category theory foundations — Steve Awodey
  • Proofs as Processes — Robert Constable
  • Polarization and focalization — Pierre-Louis Curien
  • Type theory foundations — Robert Harper
  • Monads and all that — John Hughes
  • Compiler verification — Xavier Leroy
  • Language-based security — Andrew Myers
  • Proof theory foundations — Frank Pfenning
  • Software foundations in Coq — Benjamin Pierce

Enjoy!

August 6, 2012

There and Back Again

Filed under: CS Lectures,Programming,Types — Patrick Durusau @ 3:18 pm

There and Back Again by Robert Harper.

From the post:

Last fall it became clear to me that it was “now or never” time for completing Practical Foundations for Programming Languages, so I put just about everything else aside and made the big push to completion. The copy editing phase is now complete, the cover design (by Scott Draves) is finished, and its now in the final stages of publication. You can even pre-order a copy on Amazon; it’s expected to be out in November.

I can already think of ways to improve it, but at some point I had to declare victory and save some powder for future editions. My goal in writing the book is to organize as wide a body of material as I could manage in a single unifying framework based on structural operational semantics and structural type systems. At over 600 pages the manuscript is at the upper limit of what one can reasonably consider a single book, even though I strived for concision throughout.

Quite a lot of the technical development is original, and does not follow along traditional lines. For example, I completely decouple the concepts of assignment, reference, and storage class (heap or stack) from one another, which makes clear that one may have references to stack-allocated assignables, or make use of heap-allocated assignables without having references to them. As another example, my treatment of concurrency, while grounded in the process calculus tradition, coheres with my treatment of assignables, but differs sharply from conventional accounts (and suffers none of their pathologies in the formulation of equivalence).

From the preface:

Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design—the absence of ill-defined programs—follows naturally.

The purpose of this book is to explain this remark. A variety of programming language features are analyzed in the unifying framework of type theory. A language feature is defined by its statics, the rules governing the use of the feature in a program, and its dynamics, the rules defining how programs using this feature are to be executed. The concept of safety emerges as the coherence of the statics and the dynamics of a language.

In this way we establish a foundation for the study of programming languages. But why these particular methods? The main justification is provided by the book itself. The methods we use are both precise and intuitive, providing a uniform framework for explaining programming language concepts. Importantly, these methods scale to a wide range of programming language concepts, supporting rigorous analysis of their properties. Although it would require another book in itself to justify this assertion, these methods are also practical in that they are directly applicable to implementation and uniquely effective as a basis for mechanized reasoning. No other framework offers as much.
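In the judgmental notation the book uses, the statics and dynamics of, say, addition look like this (a standard example, not a quotation from the text):

  \[
    \frac{\Gamma \vdash e_1 : \mathsf{num} \qquad \Gamma \vdash e_2 : \mathsf{num}}
         {\Gamma \vdash \mathsf{plus}(e_1, e_2) : \mathsf{num}}
    \qquad
    \frac{\;}{\mathsf{plus}(\overline{m}, \overline{n}) \longmapsto \overline{m + n}}
  \]

Safety is the coherence of the two: every well-typed plus expression steps to a numeral of type num.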

Now that Robert has lunged across the author’s finish line, which one of us will incorporate his thinking into our own?

December 18, 2011

Error Handling, Validation and Cleansing with Semantic Types and Mappings

Filed under: Expressor,Mapping,Semantics,Types — Patrick Durusau @ 8:41 pm

Error Handling, Validation and Cleansing with Semantic Types and Mappings by Michael Tarallo.

From the post:

expressor ETL applications can set up data validation rules and error handling in a few ways. The traditional approach with many ETL tools is to build in the rules using the various ETL operators. A more streamlined approach is to also use the power of expressor Semantic Mappings and Semantic Types.

  • Semantic Mappings specify how a variety of characteristics are to be handled when string, number, and date-time data types are mapped from the physical schema (your source) to the logical semantic layer known as the Semantic Type.
  • Semantic Types allow you to define, in business terms, how you want the data and the data model to be represented.

These methods provide a means of both validating attribute data and invoking corrective actions if rules are violated.

  • Data Validation rules can be in the form of pattern matching, value ranges, character lengths, formatting, currency and other specific data type constraints.
  • Corrective Actions can be in the form of null, default and correction value replacements as well as specific operator handling to either skip records or reject them to another operator.

NOTE: Semantic Mapping rules are applied first before Semantic Type rules.
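A hedged Haskell sketch of the pattern (not expressor’s actual API; the types and rules below are invented to show the shape of validation plus corrective action):

  import Data.Char (isDigit)

  -- A rule either accepts a value, corrects it, or rejects the record.
  data Outcome a = Valid a | Corrected a | Rejected String deriving Show

  -- Formatting and value-range checks with a default-value replacement.
  validateAge :: String -> Outcome Int
  validateAge s
    | null s        = Corrected 0            -- default-value replacement
    | all isDigit s = let n = read s
                      in if n < 130 then Valid n
                         else Rejected "age out of range"
    | otherwise     = Rejected ("not a number: " ++ s)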

Read more here.

I am still trying to find time to test at least the community edition of the software.

What “extra” time I have now is being soaked up configuring/patching Eclipse to build Nutch, to correct a known problem between Nutch and Solr. I suspect you could sell a packaged version of open source software that has all the paths and classes hard-coded into the software. No more setting paths, having inconsistent library versions, etc. Just unpack and run. Store data in a separate directory. When a new version comes out, simply rm -R the install directory and unpack the new one. That should also include the “.” files. Configuration/patching isn’t a good use of anyone’s time. (Unless you want to sell the results. 😉 )

But I will get to it! Unless someone beats me to it and wants to send me a link to their post that I can cite and credit on my blog.


Two things I would change about Michael’s blog:

Prerequisite: Knowledge of expressor Studio and dataflows. You can find tutorials and documentation here

To read:

Prerequisites:

  • expressor software (community or 30-day free trial) here.
  • Knowledge of expressor Studio and dataflows. You can find tutorials and documentation here

And, well, this one is not about Michael’s blog but the expressor download page: if the desktop/community edition is “expressor Studio,” then call it that on the download page.

Don’t use different names for a software package and expect users to sort it out. Not if you want to encourage downloads and sales anyway. Surveys show you have to wait until they are paying customers to start abusing them. 😉

October 19, 2011

Ptolemy Project

Filed under: Polymorphism,Semantics,Types — Patrick Durusau @ 3:16 pm

Ptolemy Project: heterogeneous modeling and design

If you think you have heard me use the name Ptolemy on this blog, you would be correct, though not the same one: Ptolemy V, whose decree was recorded on the Rosetta Stone. See: An Early Example of Collocation. There are even earlier multi-lingual texts; I need to track down good images of them and do a post about them.

Back to the Ptolemy Project. Webpage reads in part:

The Ptolemy project studies modeling, simulation, and design of concurrent, real-time, embedded systems. The focus is on assembly of concurrent components. The key underlying principle in the project is the use of well-defined models of computation that govern the interaction between components. A major problem area being addressed is the use of heterogeneous mixtures of models of computation. A software system called Ptolemy II is being constructed in Java….

One of their current research thrusts:

Abstract semantics: Domain polymorphism, behavioral type systems, meta-modeling of semantics, comparative models of computation.

BTW, this is the core engine for the Kepler project.

September 28, 2011

Practical Foundations for Programming Languages

Filed under: CS Lectures,Programming,Types — Patrick Durusau @ 7:33 pm

Practical Foundations for Programming Languages (pdf) by Robert Harper, Carnegie Mellon University.

From Chapter 1, page 3:

Programming languages are languages, a means of expressing computations in a form comprehensible to both people and machines. The syntax of a language specifies the means by which various sorts of phrases (expressions, commands, declarations, and so forth) may be combined to form programs. But what sort of thing are these phrases? What is a program made of?

The informal concept of syntax may be seen to involve several distinct concepts. The surface, or concrete, syntax is concerned with how phrases are entered and displayed on a computer. The surface syntax is usually thought of as given by strings of characters from some alphabet (say, ASCII or Unicode). The structural, or abstract, syntax is concerned with the structure of phrases, specifically how they are composed from other phrases. At this level a phrase is a tree, called an abstract syntax tree, whose nodes are operators that combine several phrases to form another phrase. The binding structure of syntax is concerned with the introduction and use of identifiers: how they are declared, and how declared identifiers are to be used. At this level phrases are abstract binding trees, which enrich abstract syntax trees with the concepts of binding and scope.

In this chapter we prepare the ground for all of our later work by defining precisely what are strings, abstract syntax trees, and abstract binding trees. The definitions are a bit technical, but are fundamentally quite simple and intuitive. It is probably best to skim this chapter on first reading, returning to it only as the need arises.
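A toy version of the distinction in Haskell: the concrete syntax “1 + 2 * x” is a string, while the abstract syntax tree records only how phrases are combined (the datatype is mine, not Harper’s):

  -- Abstract syntax: nodes are operators combining sub-phrases.
  data Expr
    = Num Int
    | Var String
    | Add Expr Expr
    | Mul Expr Expr
    deriving Show

  -- The abstract syntax tree for the concrete phrase "1 + 2 * x".
  example :: Expr
  example = Add (Num 1) (Mul (Num 2) (Var "x"))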

I am always amused when authors counsel readers to “skim” an early chapter and to return to it when in need. That works for the author, who already knows the material in the first chapter cold; it works less well in my experience as a reader. How will I be aware that some future need could be satisfied by re-reading the first chapter? The first chapter is only nine (9) pages out of five hundred and seventy (570), so my suggestion would be to get the first chapter out of the way with a close reading.

From the preface:

This is a working draft of a book on the foundations of programming languages. The central organizing principle of the book is that programming language features may be seen as manifestations of an underlying type structure that governs its syntax and semantics. The emphasis, therefore, is on the concept of type, which codifies and organizes the computational universe in much the same way that the concept of set may be seen as an organizing principle for the mathematical universe. The purpose of this book is to explain this remark.

I think it is the view that “the concept of type…codifies and organizes the computational universe” that I find attractive. That being the case, we are free to construct computational universes that best fit our purposes, as opposed to fitting our purposes to particular computational universes.


Update: August 6, 2012 – First edition completed, see: There and Back Again
