We apply three different hyperparameter optimization techniques to a set of classification models and compare the resulting accuracy.
The aim of this article is to survey the available optimization techniques, test them on a simple example, compare them, and give an overview of the improvements obtained.
Hyperparameters are adjustable model parameters that must be tuned to obtain optimal performance. Optimizing a model's hyperparameters is therefore a crucial step in improving the performance of the selected algorithm.
We need to know, to some extent, what each hyperparameter does in each algorithm and what values it can take. Understanding these hyperparameters in depth is necessary but also a huge task, one that often requires knowing how the algorithm works internally and the mathematics behind it. This article does not go to that depth, but we will analyze several algorithms, selecting a few hyperparameters in each of them.
Anyone who has used an algorithm has probably already tried some manual tuning of the default values. This manual adjustment usually takes a long time, is rarely done rigorously, and makes it difficult to systematize the results.
Second, we can apply simple automated techniques such as Grid Search and Random Search, which usually give better results but at a high cost in time and computation. We will apply both techniques and compare their results.
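As a minimal sketch of these two techniques, the snippet below uses scikit-learn's GridSearchCV and RandomizedSearchCV on a toy dataset with a RandomForestClassifier; the dataset, the classifier, and the parameter grid are placeholder assumptions, not the actual experiment from this article.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# Toy dataset standing in for the article's classification problem (assumption)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = RandomForestClassifier(random_state=0)

# Hypothetical search space: 3 x 3 = 9 combinations
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]}

# Grid Search: exhaustively evaluates every combination with 3-fold CV
grid = GridSearchCV(model, param_grid, cv=3).fit(X, y)

# Random Search: samples only n_iter combinations from the same space
rand = RandomizedSearchCV(model, param_grid, n_iter=5, cv=3,
                          random_state=0).fit(X, y)

print("Grid Search best:", grid.best_params_, grid.best_score_)
print("Random Search best:", rand.best_params_, rand.best_score_)
```

Random Search evaluates far fewer candidates (5 vs. 9 here; the gap grows with the size of the grid), which is where its time savings come from.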
Finally, we will apply Bayesian optimization, a method for finding the minimum of a function, using Python's hyperopt library on the best of the tested algorithms. This technique may not be as easy to implement, but it can outperform the previous ones in performance or time.