Archive for the ‘Constraint Programming’ Category

Recreational Constraint Programmer

Monday, November 16th, 2015 [Embedding of the video disabled at the source. Follow the link.]

From the description:

Many of us have hazy memories of finite state machines from computer science theory classes in college. But finite state machines (FSMs) have real, practical value, and it is useful to know how to build and apply them in Clojure. For example, FSMs have long been popular to model game AIs and workflow rules, and FSMs provide the behind-the-scenes magic that powers Java’s regexes and core.async’s go blocks. In this talk, we’ll look at two programming puzzles that, surprisingly, have very elegant solutions when looked at through the lens of FSMs, with code demonstrations using two different Clojure libraries for automata (automat and reduce-fsm), as well as loco, a Clojure constraint solver.

If you have never heard anyone describe themselves as a “recreational constraint programmer,” you really need to see this video!
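As a refresher on the machinery the talk builds on, a finite state machine is just a set of states, a transition table, and accepting states. Here is a minimal sketch of my own in Python (not code from the talk, which uses Clojure), recognizing strings over the alphabet {'a', 'b'} that contain the substring "ab":

```python
# Deterministic finite state machine over the alphabet {'a', 'b'}.
# States: 0 = start, 1 = just saw 'a', 2 = saw "ab" (accepting, absorbing).
TRANSITIONS = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}

def accepts(s):
    """Run the FSM over s; accept iff the substring 'ab' appeared."""
    state = 0
    for ch in s:
        state = TRANSITIONS[(state, ch)]
    return state == 2

print(accepts("aab"))   # True  ('ab' occurs)
print(accepts("ba"))    # False (no 'ab')
```

This table-driven shape is essentially what regex engines compile patterns down to, which is why FSMs "power Java’s regexes" as the description says.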

If you think of having a single representative for a subject as a constraint on a set of topics, the question becomes: what properties must each topic have in order to satisfy that constraint?

Some properties, such as family names, will lead to over-merging of topics; other properties, such as possession of one and only one social security number, will under-merge topics where a person in fact has multiple social security numbers.
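To make the over/under-merging point concrete, here is a toy sketch of my own (the names and records are invented for illustration) that merges topic records on a chosen key property. Merging on family name conflates distinct people; merging on SSN splits one person who holds two SSNs into two topics:

```python
from collections import defaultdict

def merge_topics(records, key):
    """Group topic records by the value of a key property;
    every group becomes one merged topic."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec["name"])
    return dict(groups)

records = [
    {"name": "J. Smith (plumber)", "family": "Smith", "ssn": "111"},
    {"name": "A. Smith (doctor)",  "family": "Smith", "ssn": "222"},
    # The same person holding two SSNs:
    {"name": "B. Jones", "family": "Jones", "ssn": "333"},
    {"name": "B. Jones", "family": "Jones", "ssn": "444"},
]

# Over-merge: two different Smiths collapse into one topic.
print(merge_topics(records, "family"))
# Under-merge: one B. Jones splits into two topics, one per SSN.
print(merge_topics(records, "ssn"))
```

The choice of key property, not the merging machinery, is what decides whether the "single representative per subject" constraint holds.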

The best code demonstration in the video was the generation of a fairly complex crossword puzzle, sans the clues for each word. I think the clues were left as an exercise for the reader. 😉
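Crossword filling is a natural constraint problem: each slot is a variable over a lexicon, and crossing cells must agree. The talk uses loco (a Clojure wrapper around the Choco solver); the following is only a brute-force Python sketch of mine with an invented two-slot layout, not loco's API:

```python
from itertools import product

# Toy crossword: one 3-letter across slot and one 3-letter down slot,
# crossing where across[1] must equal down[0] (hypothetical layout).
ACROSS_WORDS = ["cat", "dog", "map"]
DOWN_WORDS = ["ant", "ore", "tip"]

def solve():
    # Enumerate candidate pairs; keep those satisfying the
    # crossing-cell constraint.
    for a, d in product(ACROSS_WORDS, DOWN_WORDS):
        if a[1] == d[0]:
            yield (a, d)

print(list(solve()))  # [('cat', 'ant'), ('dog', 'ore'), ('map', 'ant')]
```

A real solver replaces this exhaustive enumeration with constraint propagation and search, which is what makes grids of realistic size tractable.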

Code Repositories:

Encouraging enough that you might want to revisit regular expressions.


Basic planning algorithm

Sunday, February 10th, 2013

Basic planning algorithm by Ricky Ho.

From the post:

Planning can be thought of as a graph search problem where each node in the graph represents a possible “state” of reality. A directed edge from nodeA to nodeB represents an “action” available to transition stateA to stateB.

Planning can be thought of as another form of constraint optimization problem, quite different from the one I described in my last post. In the planning case, the constraint is the goal state we want to achieve, and a sequence of actions needs to be found to meet that constraint. The sequence of actions incurs cost, and our objective is to minimize the cost associated with our chosen actions.
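The cost-minimizing formulation above is exactly what uniform-cost search (Dijkstra's algorithm over the state graph) computes. A minimal sketch of my own (the states and actions are invented, not Ho's code), where states are nodes, actions are weighted edges, and we return the cheapest action sequence reaching the goal:

```python
import heapq

def plan(actions, start, goal):
    """actions: dict mapping state -> list of (action_name, next_state, cost).
    Returns (total_cost, [action_names]) for the cheapest plan, or None."""
    frontier = [(0, start, [])]
    seen = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for name, nxt, c in actions.get(state, []):
            heapq.heappush(frontier, (cost + c, nxt, path + [name]))
    return None

actions = {
    "A": [("walk", "B", 4), ("bus", "C", 2)],
    "B": [("walk", "D", 1)],
    "C": [("walk", "D", 5)],
}
print(plan(actions, "A", "D"))  # (5, ['walk', 'walk'])
```

Note that the cheapest first action ("bus", cost 2) is not on the cheapest plan; minimizing total plan cost requires global search, not greedy action choice.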

Makes me curious about topic maps that perform merging based on the “cost” of the merge.

That is, upon a query, an engine might respond with a merger of topics found on one node without requesting data from remote nodes.

In particular, I am thinking of the network performance issues we all experience, such as waiting for ads to download.

Depending upon my requirements, I should be able to evaluate those costs and avoid them.

I may not have the most complete information but that may not be a requirement for some use cases.

Constraint-Based XML Query Rewriting for Data Integration

Monday, April 16th, 2012

Constraint-Based XML Query Rewriting for Data Integration by Cong Yu and Lucian Popa.


We study the problem of answering queries through a target schema, given a set of mappings between one or more source schemas and this target schema, and given that the data is at the sources. The schemas can be any combination of relational or XML schemas, and can be independently designed. In addition to the source-to-target mappings, we consider as part of the mapping scenario a set of target constraints specifying additional properties on the target schema. This becomes particularly important when integrating data from multiple data sources with overlapping data and when such constraints can express data merging rules at the target. We define the semantics of query answering in such an integration scenario, and design two novel algorithms, basic query rewrite and query resolution, to implement the semantics. The basic query rewrite algorithm reformulates target queries in terms of the source schemas, based on the mappings. The query resolution algorithm generates additional rewritings that merge related information from multiple sources and assemble a coherent view of the data, by incorporating target constraints. The algorithms are implemented and then evaluated using a comprehensive set of experiments based on both synthetic and real-life data integration scenarios.

Who does this sound like?

Data merging is notoriously hard for data integration and often not dealt with. Integration of scientific data, however, offers many complex scenarios where data merging is required. For example, proteins (each with a unique protein id) are often stored in multiple biological databases, each of which independently maintains different aspects of the protein data (e.g., structures, biological functions, etc.). When querying on a given protein through a target schema, it is important to merge all its relevant data (e.g., structures from one source, functions from another) given the constraint that protein id identifies all components of the protein.
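The protein example suggests a simple mental model: a target constraint like "protein id identifies all components" acts as a merge key, and query answering stitches together partial records from each source. A toy sketch of my own (invented data, and nothing like the paper's actual rewriting algorithms, which avoid materializing the target instance):

```python
def merge_by_id(sources, key="protein_id"):
    """Merge partial records from several sources; the key property
    identifies which records describe the same protein."""
    merged = {}
    for source in sources:
        for rec in source:
            merged.setdefault(rec[key], {}).update(rec)
    return merged

# One source maintains structures, another maintains functions.
structures = [{"protein_id": "P1", "structure": "alpha-helix"}]
functions = [{"protein_id": "P1", "function": "oxygen transport"}]

print(merge_by_id([structures, functions]))
```

The paper's contribution is doing this kind of merging at query-rewrite time, driven by declared target constraints, rather than by eagerly building the merged instance as this sketch does.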

When target constraints are present, it is not enough to consider only the mappings for query answering. The target instance that a query should “observe” must be defined by the interaction between all the mappings from the sources and all the target constraints. This interaction can be quite complex when schemas and mappings are nested and when the data merging rules can enable each other, possibly, in a recursive way. Hence, one of the first problems that we study in this paper is what it means, in a precise sense, to answer the target queries in the “best” way, given that the target instance is specified, indirectly, via the mappings and the target constraints. The rest of the paper will then address how to compute the correct answers without materializing the full target instance, via two novel algorithms that rewrite the target query into a set of corresponding source queries.

Wrong! 😉

The ACM reports sixty-seven (67) citations of this paper as of today. (Paper published in 2004.) Summaries of any of the citing literature welcome!

The question of data integration persists to this day. I take that to indicate that whatever the merits of this approach, data integration issues remain unsolved.

What are the merits/demerits of this approach?

META’2012 International Conference on Metaheuristics and Nature Inspired Computing

Saturday, November 5th, 2011

META’2012 International Conference on Metaheuristics and Nature Inspired Computing


  • Paper submission: May 15, 2012
  • Session/Tutorial submission: May 15, 2012
  • Paper notification: July 15, 2012
  • Session/Tutorial notification: June 15, 2012
  • Conference: October 27-31, 2012

From the website:

The 4th International Conference on Metaheuristics and Nature Inspired Computing, META’2012, will be held in Port El-Kantaoui (Sousse, Tunisia).

The conference will serve as an exchange space through sessions presenting research work, and will also include tutorials and training in metaheuristics and nature inspired computing.

The scope of the META’2012 conference includes, but is not limited to:

  • Local search, tabu search, simulated annealing, VNS, ILS, …
  • Evolutionary algorithms, swarm optimization, scatter search, …
  • Emergent nature inspired algorithms: quantum computing, artificial immune systems, bee colony, DNA computing, …
  • Parallel algorithms and hybrid methods with metaheuristics, machine learning, game theory, mathematical programming, constraint programming, co-evolutionary, …
  • Application to: logistics and transportation, telecommunications, scheduling, data mining, engineering design, bioinformatics, …
  • Theory of metaheuristics, landscape analysis, convergence, problem difficulty, very large neighbourhoods, …
  • Application to multi-objective optimization
  • Application in dynamic optimization, problems with uncertainty, bi-level optimization, …

The “proceedings” for Meta ’10 can be seen at: Meta ’10 papers. It would be more accurate to say “extended abstracts” because, for example,

Luis Filipe de Mello Santos, Daniel Madeira, Esteban Clua, Simone Martins and Alexandre Plastino. A parallel GRASP resolution for a GPU architecture

runs all of two (2) pages. That is about the average length of the other twenty (20) papers I checked.

I like concise writing, but two pages to describe a parallel GRASP implementation on a GPU architecture? It is just an enticement (there is an uglier word I could use) to get you to read the ISI journal carrying the full article.

The conference and its content look very interesting. I can’t say I care for the marketing technique for the journals in question. I am not objecting to marketing the journals, but don’t say “proceedings” when what is meant is ads for the journals.