Machine Learning Throwdown: The Reckoning by Charles Parker.
From the post:
As you, our faithful readers, know, we compared some machine learning services several months ago in our machine learning throwdown. In another recent blog post, we talked about the power of ensembles, and how your BigML models can be combined into an even more powerful classifier when many of them are learned over samples of the data. With this in mind, we decided to re-run the performance tests from the fourth throwdown post using BigML ensembles as well as single BigML models.
You can see the results in an updated version of the throwdown details file. As you’ll be able to see, the ensembles of classifiers (Bagged BigML Classification/Regression Trees) almost always outperform their solo counterparts. In addition, if we update our “medal count” table tracking the competition among our three machine learning services, we see that the BigML ensembles now lead in the number of “wins” over all datasets:
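The technique the post describes, learning many models over resampled versions of the data and voting their predictions, is bagging (bootstrap aggregating). The post doesn't show how BigML does this internally, but a minimal, self-contained sketch of the idea using a toy one-dimensional dataset and simple threshold classifiers (all names and data here are illustrative, not from BigML) looks like this:

```python
import random
from collections import Counter

random.seed(42)

# Toy dataset: x in [0, 1], true label 1 when x > 0.5,
# with a few labels flipped to simulate noise.
X = [i / 100 for i in range(100)]
y = [1 if x > 0.5 else 0 for x in X]
for i in (10, 55, 80):
    y[i] = 1 - y[i]

def train_stump(xs, ys):
    """Pick the threshold that minimizes training error (predict 1 if x > t)."""
    best_t, best_err = 0.0, float("inf")
    for t in xs:
        err = sum(1 for x, lab in zip(xs, ys) if (1 if x > t else 0) != lab)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_stumps(xs, ys, n_models=25):
    """Train each stump on a bootstrap sample (drawn with replacement)."""
    n = len(xs)
    models = []
    for _ in range(n_models):
        idx = [random.randrange(n) for _ in range(n)]
        models.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return models

def predict(models, x):
    """Majority vote across the ensemble."""
    votes = Counter(1 if x > t else 0 for t in models)
    return votes.most_common(1)[0][0]

models = bagged_stumps(X, y)
print(predict(models, 0.9))  # 1
print(predict(models, 0.1))  # 0
```

Each bootstrap sample gives a slightly different model; averaging (here, voting) their outputs reduces the variance a single model would show on noisy data, which is the effect the throwdown re-run is measuring.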
Charles continues his comparison of machine learning services.
Charles definitely has a position. 😉
On the other hand, the evidence suggests taking a close look at your requirements, data, and capabilities before defaulting to one solution or another.