## Archive for the ‘Lisp’ Category

### NLTK 2.3 – Working with Wordnet

Friday, April 12th, 2013

NLTK 2.3 – Working with Wordnet by Vsevolod Dyomkin.

From the post:

I’m a little bit behind my schedule of implementing NLTK examples in Lisp, with no posts on the topic in March. It doesn’t mean that work on CL-NLP has stopped – I’ve just had an unexpected vacation and also worked on parts related to writing programs for the excellent Natural Language Processing Coursera course by Michael Collins.

Today we’ll start looking at Chapter 2, but we’ll do it from the end, first exploring the topic of Wordnet.

Vsevolod more than makes up for his absence with his post on Wordnet.

As a sample of the potential of Wordnet, see the graphic in the post.

Pay particular attention to the coverage of similarity measures.

Enjoy!

### AI Algorithms, Data Structures, and Idioms…

Tuesday, March 19th, 2013

AI Algorithms, Data Structures, and Idioms in Prolog, Lisp and Java by George F. Luger and William A. Stubblefield.

From the introduction:

Writing a book about designing and implementing representations and search algorithms in Prolog, Lisp, and Java presents the authors with a number of exciting opportunities.

The first opportunity is the chance to compare three languages that give very different expression to the many ideas that have shaped the evolution of programming languages as a whole. These core ideas, which also support modern AI technology, include functional programming, list processing, predicate logic, declarative representation, dynamic binding, meta-linguistic abstraction, strong-typing, meta-circular definition, and object-oriented design and programming. Lisp and Prolog are, of course, widely recognized for their contributions to the evolution, theory, and practice of programming language design. Java, the youngest of this trio, is both an example of how the ideas pioneered in these earlier languages have shaped modern applicative programming, as well as a powerful tool for delivering AI applications on personal computers, local networks, and the world wide web.

Where could you go wrong with comparing Prolog, Lisp and Java?

Whether for the intellectual exercise or for a better understanding of AI, this is a resource to enjoy!

### NLTK 1.3 – Computing with Language: Simple Statistics

Wednesday, March 6th, 2013

NLTK 1.3 – Computing with Language: Simple Statistics by Vsevolod Dyomkin.

From the post:

Most of the remaining parts of the first chapter of the NLTK book serve as an introduction to Python in the context of text processing. I won’t translate that to Lisp, because there are much better resources explaining how to use Lisp properly. First and foremost I’d refer anyone interested to the appropriate chapters of Practical Common Lisp:

It’s only worth noting that Lisp has a different notion of lists than Python. Lisp’s lists are linked lists, while Python’s are essentially vectors. Lisp also has vectors as a separate data structure, and it has multidimensional arrays (something Python mostly lacks). The set of Lisp’s list operations is somewhat different from Python’s. The list is the default sequence data structure, but you should understand its limitations and know when to switch to vectors (when you have many elements and often access them at random). Also, Lisp doesn’t provide Python-style syntactic sugar for slicing and dicing lists, although all the operations are there in the form of functions. The only thing which isn’t easily reproducible in Lisp is assigning to a slice:
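A minimal Common Lisp sketch of the distinction (the variable names are illustrative, not from the post):

```lisp
;; Lists are linked lists: SUBSEQ is the closest analogue of Python's
;; xs[1:3], and it returns a copy rather than a view.
(defparameter *xs* (list 1 2 3 4 5))
(subseq *xs* 1 3)                     ; => (2 3)

;; (SETF SUBSEQ) replaces elements in place, but unlike Python's
;; xs[1:3] = [...] it cannot grow or shrink the sequence.
(setf (subseq *xs* 1 3) '(20 30))
*xs*                                  ; => (1 20 30 4 5)

;; For frequent random access, switch to a vector: AREF is O(1),
;; while walking a list with NTH is O(n).
(defparameter *v* (vector 1 2 3 4 5))
(aref *v* 2)                          ; => 3
```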

Vsevolod continues his journey through chapter 1 of NLTK 1.3 focusing on the statistics (with examples).

### NLTK 1.1 – Computing with Language: …

Monday, March 4th, 2013

NLTK 1.1 – Computing with Language: Texts and Words by Vsevolod Dyomkin.

From the post:

OK, let’s get started with the NLTK book. Its first chapter tries to impress the reader with how simple it is to accomplish some neat things with texts using it. Actually, the underlying algorithms that achieve these results are mostly quite basic. We’ll discuss them in this post, and the code for the first part of the chapter can be found in nltk/ch1-1.lisp.

A continuation of Natural Language Meta Processing with Lisp.

Who knows? You might decide that Lisp is a natural language.

### The Rob Warnock Lisp Usenet Archive [Selling Topic Maps]

Sunday, February 24th, 2013

The Rob Warnock Lisp Usenet Archive by Zach Beane.

From the post:

I've been reading and enjoying comp.lang.lisp for over 10 years. I find it important to ignore the noise and seek out material from authors that clearly have something interesting and informative to say.

Rob Warnock has posted neat stuff for many years, both in comp.lang.lisp and comp.lang.scheme. After creating the Erik Naggum archive, Rob was next on my list of authors to archive. It took me a few years, but here it is: the Rob Warnock Lisp Usenet archive. It has 3,265 articles from comp.lang.lisp and comp.lang.scheme from 1995 to 2009, indexed and searchable. I hope it helps you find as many useful articles as I have over the years.

You can imagine my heartbreak when the Erik Naggum archive turned out to be for comp.lang.lisp.

I think Zach’s point, it is “important to ignore the noise and seek out material from authors that clearly have something interesting and informative to say,” is a clue to the difficulty selling topic maps.

Who thinks that is important?

If I am being paid by the hour to sort through search engine results, what is my motivation to do it faster/better?

If I am managing hourly workers, who are doing the sorting of search engine results, won’t doing it faster reduce the payroll I manage?

If my department has the manager with hourly workers and the facilities to house them, what is my motivation for faster/better?

If my company/government agency has the department with the manager with hourly workers and facilities under contract, what is my motivation for faster/better?

If that helps identify who has no motivation for topic maps, who should be interested in topic maps?

I first saw this at Christophe Lalanne’s A bag of tweets / February 2013.

### Natural Language Meta Processing with Lisp

Sunday, February 24th, 2013

Natural Language Meta Processing with Lisp by Vsevolod Dyomkin.

From the post:

Recently I’ve started work on gathering and assembling a comprehensive suite of NLP tools for Lisp — CL-NLP. Something along the lines of OpenNLP or NLTK. There are actually quite a lot of NLP tools in Lisp accumulated over the years, but they are scattered over various libraries, internet sites and books. I’d like to have them in one place with a clean and concise API which would provide an easy starting point for anyone willing to do some NLP experiments or real work in Lisp. There are already a couple of NLP libraries, most notably langutils, but I don’t find them very well structured, and their development isn’t very active. So I see real value in creating CL-NLP.

Besides, I’m currently reading the NLTK book. I thought that implementing the examples from the book in Lisp could likewise be a great introduction to NLP and to Lisp, just as the original is an introduction to Python. So I’m going to work through them using the CL-NLP toolset. I plan to cover 1 or 2 chapters per month. The goal is to implement pretty much everything meaningful, including the graphs — for them I’m going to use gnuplot driven by cgn, which I learned of while answering questions on StackOverflow. I’ll try to realize the examples just from the description — not looking at NLTK code — although I reckon that will sometimes be necessary if the results don’t match. Also in the process I’m going to discuss different stuff re NLP, Lisp, Python, and NLTK — that’s why there’s “meta” in the title.

Just in case you haven’t found a self-improvement project for 2013!

Seriously, this could be a real learning experience.

I first saw this at Christophe Lalanne’s A bag of tweets / February 2013.

### Lisp lore : a guide to programming the Lisp machine (1986)

Sunday, February 17th, 2013

Lisp lore : a guide to programming the Lisp machine (1986) by Hank Bromley.

From the introduction:

The full 11-volume set of documentation that comes with a Symbolics lisp machine is understandably intimidating to the novice. “Where do I start?” is an oft-heard question, and one without a good answer. The eleven volumes provide an excellent reference medium, but are largely lacking in tutorial material suitable for a beginner. This book is intended to fill that gap. No claim is made for completeness of coverage — the eleven volumes fulfill that need. My goal is rather to present a readily grasped introduction to several representative areas of interest, including enough information to show how easy it is to build useful programs on the lisp machine. At the end of this course, the student should have a clear enough picture of what facilities exist on the machine to make effective use of the complete documentation, instead of being overwhelmed by it.

From the days when documentation was an expectation, not a luxury.

One starting place to decide whether the ideas in a patent application are “new” or were invented before a patent examiner went to college.

Some other Lisp content you may find of interest:

I first saw this at Christophe Lalanne’s “A bag of tweets / January 2013.”

### BigData using Erlang, C and Lisp to Fight the Tsunami of Mobile Data

Monday, November 26th, 2012

BigData using Erlang, C and Lisp to Fight the Tsunami of Mobile Data by Jon Vlachogiannis.

From the post:

BugSense is an error-reporting and quality metrics service that tracks thousands of apps every day. When mobile apps crash, BugSense helps developers pinpoint and fix the problem. The startup delivers first-class service to its customers, which include VMWare, Samsung, Skype and thousands of independent app developers. Tracking more than 200M devices requires fast, fault-tolerant and cheap infrastructure.

Over the last six months, we’ve decided to use our BigData infrastructure to provide users with metrics about their apps’ performance and stability, and to let them know how errors affect their user base and revenues.

We knew that our solution should be scalable from day one, because more than 4% of the smartphones out there will start DDOSing us with data.

A number of lessons to consider if you want a system that scales.

### 7 John McCarthy Papers in 7 weeks – Prologue

Sunday, October 21st, 2012

7 John McCarthy Papers in 7 weeks – Prologue by Carin Meier.

From the post:

In the spirit of Seven Languages in Seven Weeks, I have decided to embark on a quest. But instead of focusing on expanding my mindset with different programming languages, I am focusing on trying to get into the mindset of John McCarthy, father of LISP and AI, by reading and thinking about seven of his papers.

See Carin’s blog for progress so far.

I first saw this at John D. Cook’s The Endeavour.

How would you react to something similar for topic maps?

### Procedural Reflection in Programming Languages Volume 1

Saturday, April 14th, 2012

Procedural Reflection in Programming Languages Volume 1

Brian Cantwell Smith’s dissertation that is the base document for reflection in programming languages.

Abstract:

We show how a computational system can be constructed to “reason”, effectively and consequentially, about its own inferential processes. The analysis proceeds in two parts. First, we consider the general question of computational semantics, rejecting traditional approaches, and arguing that the declarative and procedural aspects of computational symbols (what they stand for, and what behaviour they engender) should be analysed independently, in order that they may be coherently related. Second, we investigate self-referential behaviour in computational processes, and show how to embed an effective procedural model of a computational calculus within that calculus (a model not unlike a meta-circular interpreter, but connected to the fundamental operations of the machine in such a way as to provide, at any point in a computation, fully articulated descriptions of the state of that computation, for inspection and possible modification). In terms of the theories that result from these investigations, we present a general architecture for procedurally reflective processes, able to shift smoothly between dealing with a given subject domain, and dealing with their own reasoning processes over that domain.

An instance of the general solution is worked out in the context of an applicative language. Specifically, we present three successive dialects of LISP: 1-LISP, a distillation of current practice, for comparison purposes; 2-LISP, a dialect constructed in terms of our rationalised semantics, in which the concept of elevation is rejected in favour of independent notions of simplification and reference, and in which the respective categories of notation, structure, semantics, and behaviour are strictly aligned; and 3-LISP, an extension of 2-LISP endowed with reflective powers. (Warning: Hand copied from an image PDF. Typing errors may have occurred.)
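The abstract’s mention of a meta-circular interpreter can be illustrated with a toy evaluator written in the language it evaluates — a sketch only, nothing like the full 2-LISP/3-LISP machinery:

```lisp
;; A toy evaluator in Common Lisp for a tiny Lisp subset.  Being written
;; in (a superset of) the language it interprets is what makes it
;; "meta-circular"; Smith's dialects go further by exposing the
;; interpreter's own state to the running program.
(defun tiny-eval (form env)
  (cond ((symbolp form) (cdr (assoc form env)))   ; variable lookup
        ((atom form) form)                        ; self-evaluating datum
        ((eq (car form) 'quote) (second form))
        ((eq (car form) 'if)
         (if (tiny-eval (second form) env)
             (tiny-eval (third form) env)
             (tiny-eval (fourth form) env)))
        (t (apply (symbol-function (car form))    ; built-in functions
                  (mapcar (lambda (f) (tiny-eval f env))
                          (cdr form))))))

;; (tiny-eval '(if (> x 1) (* x 2) 0) '((x . 3)))  => 6
```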

I think reflection as it is described here is very close to Newcomb’s notion of composite subject identities, which are themselves composed of composite subject identities.

Has me wondering what a general-purpose identification language with reflection would look like.

### Common Lisp is the best language to learn programming

Tuesday, December 6th, 2011

Common Lisp is the best language to learn programming

From the post:

Now that Conrad Barski’s Land of Lisp (see my review on Slashdot) has come out, I definitely think Common Lisp is the best language for kids (or anyone else) to start learning computer programming.

Not trying to start a language war but am curious about two resources cited in this post:

Common Lisp HyperSpec

and,

Common Lisp the language, 2nd edition

My curiosity?

How would you map these two resources into a single topic map on Lisp?

Is there any third resource, perhaps the “Land of Lisp” that you would like to add?

Any blogs, mailing list posts, etc.?

Would that topic map be any different if you decided to add Scheme or Haskell to your topic map?

If this were a “learning lisp” resource for beginning programmers, how would you limit the amount of information exposed?

### newLISP® for Mac OS X, GNU Linux, Unix and Win32

Saturday, September 24th, 2011

newLISP® for Mac OS X, GNU Linux, Unix and Win32

From the website:

newLISP is a Lisp-like, general-purpose scripting language. It is especially well-suited for applications in AI, web search, natural language processing, and machine learning. Because of its small resource requirements, newLISP is also excellent for embedded systems applications. Most of the functions you will ever need are already built in. This includes networking functions, support for distributed and parallel processing, and Bayesian statistics.

At version 10.3.3, newLISP says that it has over 350 functions and is about 200K in size.

Interesting that one of the demo applications written in 2007 is MapReduce.

There are some posts on its mailing lists, but I would not call them high traffic.

### Common Lisp HyperSpec

Thursday, September 1st, 2011

Common Lisp HyperSpec

A hypertext version of the Common Lisp standard, along with issues resolved in its making.

Since functional programming discussions make reference to Lisp fairly often, you might want to bookmark this site.

### Beating the Averages

Friday, July 8th, 2011

Beating the Averages by Paul Graham.

Great summer reading for anyone who wants a successful startup or simply to improve an ongoing software company (although the latter is probably the harder of the two tasks).

A couple of quotes to get you interested in reading more:

But I think I can give a kind of argument that might be convincing. The source code of the Viaweb editor was probably about 20-25% macros. Macros are harder to write than ordinary Lisp functions, and it’s considered to be bad style to use them when they’re not necessary. So every macro in that code is there because it has to be. What that means is that at least 20-25% of the code in this program is doing things that you can’t easily do in any other language. However skeptical the Blub programmer might be about my claims for the mysterious powers of Lisp, this ought to make him curious. We weren’t writing this code for our own amusement. We were a tiny startup, programming as hard as we could in order to put technical barriers between us and our competitors.

A suspicious person might begin to wonder if there was some correlation here. A big chunk of our code was doing things that are very hard to do in other languages. The resulting software did things our competitors’ software couldn’t do. Maybe there was some kind of connection. I encourage you to follow that thread. There may be more to that old man hobbling along on his crutches than meets the eye.
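As a small illustration (not from the Viaweb code) of why macros can do things that are hard in other languages — they control whether and when their arguments are evaluated, which no ordinary function can:

```lisp
;; MY-UNLESS evaluates its body only when the test is false.  A plain
;; function could not do this: its arguments would already be evaluated
;; before the function ran.
(defmacro my-unless (test &body body)
  `(if ,test nil (progn ,@body)))

;; (my-unless (> 1 2) (print "runs"))  ; body runs, since (> 1 2) is NIL
;; (my-unless t (error "never runs"))  ; body is never evaluated
```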

And consider his advice on evaluating competitors:

If you ever do find yourself working for a startup, here’s a handy tip for evaluating competitors. Read their job listings. Everything else on their site may be stock photos or the prose equivalent, but the job listings have to be specific about what they want, or they’ll get the wrong candidates.

During the years we worked on Viaweb I read a lot of job descriptions. A new competitor seemed to emerge out of the woodwork every month or so. The first thing I would do, after checking to see if they had a live online demo, was look at their job listings. After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening — that’s starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.

This was written in 2003.

How would you update his advice on evaluating job descriptions at other startups?

### Structure and Interpretation of Computer Programs

Saturday, April 23rd, 2011

Structure and Interpretation of Computer Programs

From the website:

Structure and Interpretation of Computer Programs has been MIT’s introductory pre-professional computer science subject since 1981. It emphasizes the role of computer languages as vehicles for expressing knowledge and it presents basic principles of abstraction and modularity, together with essential techniques for designing and implementing computer languages. This course has had a worldwide impact on computer science curricula over the past two decades. The accompanying textbook by Hal Abelson, Gerald Jay Sussman, and Julie Sussman is available for purchase from the MIT Press, which also provides a freely available on-line version of the complete textbook.

These twenty video lectures by Hal Abelson and Gerald Jay Sussman are a complete presentation of the course, given in July 1986 for Hewlett-Packard employees, and professionally produced by Hewlett-Packard Television. The videos have been used extensively in corporate training at Hewlett-Packard and other companies, as well as at several universities and in MIT short courses for industry.

An introduction to computer programming, Lisp and the art of teaching the same.

### The Lisp Curse

Tuesday, April 19th, 2011

The Lisp Curse by Rudolf Winestock begins:

This essay is yet another attempt to reconcile the power of the Lisp programming language with the inability of the Lisp community to reproduce their pre-AI Winter achievements. Without doubt, Lisp has been an influential source of ideas even during its time of retreat. That fact, plus the brilliance of the different Lisp Machine architectures, and the current Lisp renaissance after more than a decade in the wilderness demonstrate that Lisp partisans must have some justification for their smugness. Nevertheless, they have not been able to translate the power of Lisp into a movement with overpowering momentum.

In this essay, I argue that Lisp’s expressive power is actually a cause of its lack of momentum.

Read the essay, then come back here. I’ll wait.

… … … …

At first blush, I thought about HyTime and its expressiveness. Or about topic maps. Could there be a parallel?

But non-Lisp software projects proliferate.

Let’s use http://sourceforge.net for examples.

Total projects for the database category – 906.

How many were written using Lisp?

Lisp: 1

Compared to:

Java: 282
C++: 106
PHP: 298
Total: 686

That may not be fair.

Databases may not attract AI/Lisp programmers.

Lisp: 8
Scheme: 3
Total: 11

Compared to:

Java: 115
C++: 111
C: 42
Total: 268

Does that mean that Java, C++ and C are too expressive?

Or that their expressiveness has retarded their progress in some way?

Or is some other factor responsible for the proliferation of projects?

And a proliferation of semantics.

*****
Correction: I corrected sourceforge.org -> sourceforge.net and made it a hyperlink. Fortunately, SourceForge silently redirects the mistyped domain in a browser.