A Performance Study of Data Mining Techniques: Multiple Linear Regression vs. Factor Analysis by Abhishek Taneja and R. K. Chauhan.
Abstract:
The growing volume of data creates an interesting challenge: the need for data analysis tools that discover regularities in these data. Data mining has emerged as a discipline that contributes tools for data analysis, discovery of hidden knowledge, and autonomous decision making in many application domains. The purpose of this study is to compare the performance of two data mining techniques, factor analysis and multiple linear regression, for different sample sizes on three unique sets of data. The performance of the two techniques is compared on parameters such as mean square error (MSE), R-square, adjusted R-square, condition number, root mean square error (RMSE), number of variables included in the prediction model, modified coefficient of efficiency, F-value, and a test of normality. These parameters have been computed using various data mining tools such as SPSS, XLstat, Stata, and MS-Excel. It is seen that for all the given datasets, factor analysis outperforms multiple linear regression. But the absolute value of prediction accuracy varied between the three datasets, indicating that the data distribution and data characteristics play a major role in choosing the correct prediction technique.
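To make the comparison concrete, here is a minimal sketch of the kind of experiment the abstract describes: fit a multiple linear regression on the raw predictors, fit another regression on factor-analysis scores, and report a few of the metrics listed above. This is not the authors' procedure or data; the synthetic dataset, the scikit-learn estimators, and the choice of three factors are assumptions made purely for illustration.

```python
# Hedged illustration only: compare multiple linear regression against
# regression on factor-analysis scores, using synthetic data in place of the
# paper's three datasets. Metrics shown: MSE, RMSE, R-square, adjusted R-square.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

def report(name, y_true, y_pred, n_predictors):
    """Print the accuracy metrics mentioned in the abstract for one model."""
    mse = mean_squared_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    n = len(y_true)
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)
    print(f"{name}: MSE={mse:.3f}  RMSE={mse**0.5:.3f}  "
          f"R2={r2:.3f}  adj. R2={adj_r2:.3f}")

# Synthetic stand-in for the paper's datasets (an assumption, not their data).
X, y = make_regression(n_samples=500, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Multiple linear regression on the raw predictors.
mlr = LinearRegression().fit(X_train, y_train)
report("MLR", y_test, mlr.predict(X_test), X.shape[1])

# Factor analysis to reduce the predictors to a few latent factors,
# then linear regression on the factor scores.
fa = FactorAnalysis(n_components=3, random_state=0).fit(X_train)
far = LinearRegression().fit(fa.transform(X_train), y_train)
report("FA + regression", y_test, far.predict(fa.transform(X_test)), 3)
```

Which approach wins on a run like this depends entirely on how well a few latent factors capture the predictors, which is exactly the point the abstract ends on.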
I had to do a double-take when I saw “factor analysis” in the title of this article. I remember factor analysis from Schubert’s The Judicial Mind Revisited: Psychometric Analysis of Supreme Court Ideology, where Schubert used factor analysis to model the relative positions of the Supreme Court Justices. Schubert taught himself factor analysis on a Friden rotary calculator. (I had one of those too, but that’s a different story.)
The real lesson of this article comes at the end of the abstract: the data distribution and data characteristics play a major role in choosing the correct prediction technique.