Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 21, 2010

Dirichlet Processes: Tutorial and Practical Course

Filed under: Bayesian Models,Dirichlet Processes — Patrick Durusau @ 4:50 pm

Dirichlet Processes: Tutorial and Practical Course
Author: Yee Whye Teh
Slides
Paper

Abstract:

The Bayesian approach allows for a coherent framework for dealing with uncertainty in machine learning. By integrating out parameters, Bayesian models do not suffer from overfitting, so it is conceivable to consider models with infinite numbers of parameters, aka Bayesian nonparametric models. An example of such models is the Gaussian process, which is a distribution over functions used in regression and classification problems. Another example is the Dirichlet process, which is a distribution over distributions. Dirichlet processes are used in density estimation, clustering, and nonparametric relaxations of parametric models. They have been gaining popularity in both the statistics and machine learning communities, due to their computational tractability and modelling flexibility.

In the tutorial I shall introduce Dirichlet processes, and describe different representations of Dirichlet processes, including the Blackwell-MacQueen urn scheme, Chinese restaurant processes, and the stick-breaking construction. I shall also go through various extensions of Dirichlet processes, and applications in machine learning, natural language processing, machine vision, computational biology and beyond.
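
If you want a feel for the stick-breaking construction before watching, here is a minimal sketch in Python/NumPy. The concentration alpha = 2.0, the standard normal base measure, and the truncation level are illustrative choices of mine, not taken from the tutorial:

```python
# Truncated stick-breaking draw from a Dirichlet process,
# G = sum_k pi_k * delta(theta_k).
import numpy as np

def stick_breaking(alpha, num_atoms, rng):
    """Draw truncated stick-breaking weights pi and atoms theta."""
    betas = rng.beta(1.0, alpha, size=num_atoms)  # beta_k ~ Beta(1, alpha)
    # remaining stick length before break k: prod_{j<k} (1 - beta_j)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    weights = betas * remaining                   # pi_k = beta_k * remaining
    atoms = rng.standard_normal(num_atoms)        # theta_k ~ N(0, 1) base measure
    return weights, atoms

rng = np.random.default_rng(0)
weights, atoms = stick_breaking(alpha=2.0, num_atoms=50, rng=rng)
print(weights[:5], weights.sum())  # weights sum to ~1 at this truncation
```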

In the practical course I shall describe inference algorithms for Dirichlet processes based on Markov chain Monte Carlo sampling, and we shall implement a Dirichlet process mixture model, hopefully applying it to discovering clusters of NIPS papers and authors.
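
The collapsed Gibbs samplers used for Dirichlet process mixtures build on the Chinese restaurant process prior over partitions. Here is a minimal sketch of drawing a partition from that prior; the function name and alpha value are my own illustrative choices, not from the course materials:

```python
# Chinese restaurant process: customer n joins table k with probability
# proportional to its occupancy, or a new table with probability
# proportional to alpha.
import numpy as np

def crp_partition(num_customers, alpha, rng):
    """Sample a partition of customers into tables under CRP(alpha)."""
    tables = []        # occupancy count per table
    assignments = []   # table index per customer
    for n in range(num_customers):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= n + alpha                 # normalizing constant n + alpha
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)               # open a new table
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

rng = np.random.default_rng(1)
assignments, tables = crp_partition(num_customers=100, alpha=2.0, rng=rng)
print(len(tables), "clusters for 100 customers")
```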

With the last two posts, that is almost 8 hours of video to stream to your new phone or other personal device.

That should get you past even a Christmas Day sports marathon at your in-laws' house (or your own, should they be visiting).

Graphical Models

Filed under: Bayesian Models,Dirichlet Processes,Graphical Models,Inference — Patrick Durusau @ 4:12 pm

Graphical Models
Author: Zoubin Ghahramani

Abstract:

An introduction to directed and undirected probabilistic graphical models, including inference (belief propagation and the junction tree algorithm), parameter learning and structure learning, variational approximations, and approximate inference.

  • Introduction to graphical models: (directed, undirected and factor graphs; conditional independence; d-separation; plate notation)
  • Inference and propagation algorithms: (belief propagation; factor graph propagation; forward-backward and Kalman smoothing; the junction tree algorithm); a sketch of belief propagation on a chain follows this list
  • Learning parameters and structure: (maximum likelihood and Bayesian parameter learning for complete and incomplete data; EM; Dirichlet distributions; score-based structure learning; Bayesian structural EM; brief comments on causality and on learning undirected models)
  • Approximate Inference: (Laplace approximation; BIC; variational Bayesian EM; variational message passing; VB for model selection)
  • Bayesian information retrieval using sets of items: (Bayesian Sets; Applications)
  • Foundations of Bayesian inference: (Cox Theorem; Dutch Book Theorem; Asymptotic consensus and certainty; choosing priors; limitations)
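
As flagged in the inference item above, here is a minimal sketch of sum-product belief propagation on a three-node chain of binary variables, the special case that forward-backward covers. The potential tables are made-up numbers for illustration, not from the lecture:

```python
# Exact marginals on a chain MRF via forward/backward message passing.
import numpy as np

pairwise = np.array([[0.9, 0.1],
                     [0.1, 0.9]])   # psi(x_t, x_{t+1}): favors agreement
unary = np.array([[0.6, 0.4],
                  [0.5, 0.5],
                  [0.2, 0.8]])      # psi_t(x_t) for a 3-node chain

T = unary.shape[0]
fwd = np.zeros((T, 2))              # forward messages
bwd = np.zeros((T, 2))              # backward messages
fwd[0] = unary[0]
for t in range(1, T):
    # fwd[t](x_t) = unary[t](x_t) * sum_{x_{t-1}} psi * fwd[t-1]
    fwd[t] = unary[t] * (pairwise.T @ fwd[t - 1])
bwd[-1] = 1.0
for t in range(T - 2, -1, -1):
    # bwd[t](x_t) = sum_{x_{t+1}} psi * unary[t+1] * bwd[t+1]
    bwd[t] = pairwise @ (unary[t + 1] * bwd[t + 1])

marginals = fwd * bwd
marginals /= marginals.sum(axis=1, keepdims=True)
print(marginals)                    # exact node marginals on the chain
```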

Start with this lecture before watching Dirichlet Processes: Tutorial and Practical Course.
