Archive for the ‘Mutual Information Classifiers’ Category

What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

Tuesday, May 3rd, 2011

I am sure we have all laid awake at night worrying about this question at some point. 😉

Seriously, the paper shows that Bayesian and mutual-information classifiers complement each other in classification roles, and it merits your attention.

Abstract:

In this study, both Bayesian classifiers and mutual information classifiers are examined for binary classifications with or without a reject option. The general decision rules in terms of distinctions on error types and reject types are derived for Bayesian classifiers. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of “non-consistency” for interpreting cost terms. If no data is given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the theoretical differences, including the extremely-class-imbalanced cases. Finally, we briefly summarize the Bayesian classifiers and mutual-information classifiers in terms of their application advantages, respectively.

After a detailed analysis, which will help in choosing when Bayesian or mutual-information classifiers are the appropriate tool, the paper concludes:

Bayesian and mutual-information classifiers are different essentially from their applied learning targets. From application viewpoints, Bayesian classifiers are more suitable to the cases when cost terms are exactly known for trade-off of error types and reject types. Mutual-information classifiers are capable of objectively balancing error types and reject types automatically without employing cost terms, even in the cases of extremely class-imbalanced datasets, which may describe a theoretical interpretation why humans are more concerned about the accuracy of rare classes in classifications.
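The class-imbalance point in the conclusion is easy to see with a small sketch (the counts below are hypothetical, not from the paper): on an imbalanced binary problem, a classifier that always predicts the majority class scores high accuracy yet carries zero mutual information between its predictions and the true labels, while a classifier that actually detects the rare class earns positive mutual information even at slightly lower accuracy.

```python
import math

def mutual_information(confusion):
    """Mutual information I(T; Y) in bits between true labels T (rows)
    and predicted labels Y (columns), from a confusion matrix of counts."""
    total = sum(sum(row) for row in confusion)
    mi = 0.0
    for i, row in enumerate(confusion):
        p_t = sum(row) / total                     # marginal P(T = i)
        for j, n in enumerate(row):
            if n == 0:
                continue
            p_ty = n / total                       # joint P(T = i, Y = j)
            p_y = sum(confusion[k][j] for k in range(len(confusion))) / total
            mi += p_ty * math.log2(p_ty / (p_t * p_y))
    return mi

def accuracy(confusion):
    total = sum(sum(row) for row in confusion)
    return sum(confusion[i][i] for i in range(len(confusion))) / total

# Hypothetical imbalanced test set: 990 negatives, 10 positives.
# "Majority" always predicts negative: 99% accurate, but uninformative.
majority = [[990, 0], [10, 0]]
# A classifier catching 8 of 10 positives at the cost of 10 false alarms.
detector = [[980, 10], [2, 8]]

print(accuracy(majority), mutual_information(majority))  # 0.99, 0.0
print(accuracy(detector), mutual_information(detector))
```

The majority-class classifier's predictions are independent of the labels, so its mutual information is exactly zero regardless of its 99% accuracy; the rare-class detector trades a little accuracy for strictly positive information, which is the balance among error types the paper attributes to mutual-information classifiers.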