OK, not immediately obvious why this is relevant to topic maps.
Nor are Bob Carpenter’s references:
I’ve been playing with all sorts of fun new toys at the new job at Columbia and learning lots of new algorithms. In particular, I’m coming to grips with Hamiltonian (or hybrid) Monte Carlo, which isn’t as complicated as the physics-based motivations may suggest (see the discussion in David MacKay’s book and then move to the more detailed explanation in Christopher Bishop’s book).
I suspect the two book references are:
- Pattern Recognition and Machine Learning by Christopher M. Bishop (Christopher Bishop’s book)
- Information Theory, Inference, and Learning Algorithms by David J. C. MacKay (David MacKay’s book)
but I haven’t asked, in part to illustrate the problem of resolving any entity reference. Both authors have written other books touching on the same subjects, so my guesses may or may not be correct.
Oh, relevance to topic maps. The technique of automatic differentiation is used in Hamiltonian Monte Carlo methods to compute gradients. Still not helpful? It isn’t to me either.
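For the curious, here is a minimal sketch of what “automatic differentiation for gradients” means in this context. This is my own illustration, not code from Carpenter or either book: forward-mode automatic differentiation via dual numbers, applied to the negative log-density of a standard normal, which is the kind of gradient an HMC sampler needs at every step.

```python
class Dual:
    """A dual number a + b*eps with eps**2 == 0; b carries the derivative."""

    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (a + b*eps)(c + d*eps) = ac + (bc + ad)*eps
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def grad(f, x):
    """Gradient of f at point x (a list): one dual-number pass per coordinate."""
    g = []
    for i in range(len(x)):
        args = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(x)]
        g.append(f(args).deriv)
    return g


# Negative log-density of a standard normal, up to a constant: 0.5 * sum(x_i^2).
def neg_log_density(x):
    return 0.5 * sum(xi * xi for xi in x)


# The gradient of 0.5 * sum(x_i^2) is just x itself.
print(grad(neg_log_density, [1.0, -2.0]))  # → [1.0, -2.0]
```

Real HMC implementations use reverse-mode differentiation on much larger models, but the idea is the same: the sampler gets exact gradients of the log-density without anyone deriving them by hand.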
Ah, what about Bayesian models in IR? That made the light go on!
I will be discussing ways to show more immediate relevance to topic maps, at least for some posts, in post #1000.
It isn’t as far away as you might think.