In this post, which remains a work in progress, we examine the performance of some popular open source optimization libraries that are intended for use in a derivative-free manner. We compare their ability to find minima of analytic functions with known properties. After forming some tentative views and shortlisting, we turn to their performance on log-likelihood functions arising in the estimation of parameters for a time series filter.
This exercise was intended to inform, in some small way, the choice of parameter optimization methods for time series algorithms. In order to address noisy, skewed, intermittent, lurching data that we see in real live instrumented processes, it is often necessary to use models whose parameters or hyper-parameters are analytically inconvenient (e.g. no derivatives). Sometimes automatic differentiation is possible, and that will be considered in a future post, but for now we assume that isn't the case.
Like any such analysis, this one is heavily influenced by the choice of problems tackled, and we would encourage the reader to perform their own analysis in the domain of their choosing before coming to any strong opinions. There are also many ways to modify the behavior of any optimizer through choice of arguments. We have had some back and forth with the authors of these packages and will seek to improve this study over time.
We used objective functions collated in the tuneup Python package. In turn, some of those objective functions are taken from the DEAP package, and comprise analytic functions suggested by various researchers (see the DEAP website for attributions). These functions have the advantage of fast computation. They are tabulated below.
These formulas have been specialized to three dimensions for clarity, though it should be clear from their definitions that they generalize.
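As a concrete reference, here are two of the staples in their standard N-dimensional forms (the benchmark versions in tuneup may additionally scale or skew them, so treat these as the plain textbook definitions):

```python
import numpy as np

def rastrigin(x):
    """Standard Rastrigin function; global minimum of 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return 10.0 * len(x) + float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

def schwefel(x):
    """Standard Schwefel function; global minimum near 0 at x_i = 420.9687."""
    x = np.asarray(x, dtype=float)
    return 418.9829 * len(x) - float(np.sum(x * np.sin(np.sqrt(np.abs(x)))))
```

Both are cheap to evaluate yet highly multi-modal, which is what makes them popular stand-ins for expensive objectives.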
A number of objective functions in the tuneup package are not, however, analytic functions of this sort. Instead they are negative log-likelihood functions arising in the estimation of model parameters for time series models. We will come to them in due course after shortlisting, because their computation time is significant.
We only included optimizers designed to work on non-differentiable functions. We also only included packages deployed on PyPI that installed, and whose "hello world" examples worked. Where that wasn't the case, we created an issue so as to increase the chances of including the package the next time around. We apologize for any omissions.
|Optimizer||Specific description||General description|
|Facebook Ax||Bayesian optimization||Ax is an accessible, general-purpose platform for understanding, managing, deploying, and automating adaptive experiments.|
|Optuna||Tree-structured Parzen Estimator||Automated search for optimal hyperparameters using Python conditionals, loops, and syntax.|
|Hyperopt||Tree-structured Parzen Estimator||Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions.|
|Scipy||Simplicial homology global optimization (SHGO); Powell's conjugate direction method||Provides several commonly used optimization algorithms.|
|PySOT||Surrogate optimization||Asynchronous parallel optimization toolbox for computationally expensive global optimization problems.|
|PyMoo||Differential evolution, brkga, nelder mead et al.||State of the art single and multi-objective optimization algorithms, and many more features related to multi-objective optimization, such as visualization and decision making.|
|Platypus||Genetic, evolutionary||Framework for evolutionary computing in Python with a focus on multiobjective evolutionary algorithms.|
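For most of these packages, the "hello world" amounts to a handful of lines. A minimal sketch using SciPy's Powell method on a toy convex function (assuming scipy is installed; the sphere function here is just a smoke test, not one of the benchmark objectives):

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    # Simple convex objective: sum of squares, minimum 0 at the origin.
    return float(np.sum(np.asarray(x) ** 2))

result = minimize(sphere, x0=[1.0, 2.0, 3.0], method="Powell")
print(result.x, result.fun)
```

If a package's equivalent of the above failed out of the box, it was excluded and an issue was filed, per the policy described here.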
The first part of the analysis attempted to get a handle on whether one optimizer was likely to result in significant computation savings versus another (assuming that the objective function optimization is the only cost). To this end, we set about converting the best function value found (i.e. minimum) into a simple score intended to be indicative of logarithmic computation time.
In the first phase, we ran each optimizer until a pre-set number of function evaluations (n) was reached. We varied n and computed the mean minimum found across all "good" optimizers (more on that in a moment). An example is shown for the Schwefel function.
|Maximum Evaluations||Mean Minimum Found||Maximum Evaluations||Mean Minimum Found|
As an example of applying this scale, suppose that one optimization run for Scipy's Powell results in an objective function value of 203.1. We would assign a score of 8, since it is better than the 250.5 recorded for n=128 (and, as it happens, for all smaller n), but not better than the 136.4 which is the mean minimum found across all optimizers given a limit of 256 evaluations. The first phase was intended to create a scale of performance that is easily related to time.
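One way to implement such a score is to count how many budget levels a run's minimum beats. The ladder below is illustrative only: the 250.5 (n=128) and 136.4 (n=256) entries come from the worked example in the text, while the remaining entries are made up for the sketch:

```python
# Hypothetical performance ladder: budget n -> mean minimum found across all
# "good" optimizers at that budget. Only the n=128 and n=256 entries are
# taken from the example in the text; the rest are illustrative.
LADDER = [(11, 812.0), (16, 704.3), (23, 650.9), (32, 540.2), (45, 488.1),
          (64, 401.7), (90, 330.5), (128, 250.5), (256, 136.4)]

def score(found_minimum, ladder=LADDER):
    """Score = number of budget levels whose mean minimum the run beat.

    Since the n values grow geometrically, a difference of one point in
    score is roughly a constant multiplicative factor in computation time.
    """
    return sum(1 for _, mean_min in ladder if found_minimum < mean_min)

# The worked example: a run reaching 203.1 beats the n=128 level (250.5)
# and everything below it, but not the n=256 level (136.4).
print(score(203.1))  # → 8 with this illustrative ladder
```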
A difficulty arose because, for some packages, the argument provided to the optimizer intended to limit the number of function evaluations is treated only as a rough guide. Indeed, by auditing the number of actual function evaluations, we discovered that this limit was often exceeded - a fact that might easily lead one to believe those optimizers were superior. To partially get around this problem, we introduced a backoff in the parameter said to govern the number of function evaluations. If the audited count exceeded n by more than 10%, we reduced the parameter by 10% and repeated. In this manner, we attempted to treat all algorithms on a roughly equal footing. However this only addressed the issue if the number of evaluations was close to the user setting. This did not solve the problem in some important cases, and we will return to this issue shortly.
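A minimal sketch of this audit-and-backoff procedure; `run_optimizer` is a hypothetical adapter around whichever package is being tested, taking the objective and the evaluation-limit argument:

```python
def with_counter(f):
    """Wrap an objective so the true number of calls can be audited."""
    def wrapped(x):
        wrapped.count += 1
        return f(x)
    wrapped.count = 0
    return wrapped

def calibrate_budget(run_optimizer, objective, n, tol=0.10, max_tries=20):
    """Back off the evaluation-limit argument until the audited call count
    is within tol of the requested budget n.

    run_optimizer(f, limit) is a hypothetical adapter that invokes one of
    the packages with `limit` passed as its evaluation-limit argument.
    """
    limit = float(n)
    audited = with_counter(objective)
    for _ in range(max_tries):
        audited = with_counter(objective)
        run_optimizer(audited, int(limit))
        if audited.count <= n * (1 + tol):
            break
        limit *= (1 - tol)  # shave 10% off the requested limit and retry
    return int(limit), audited.count
```

This only helps when the optimizer's overshoot scales with the requested limit; as noted, it does not fix the cases where the overshoot is structural.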
A second comment is that the scoring scale, and the analysis itself, is not easy to interpret if optimizers have vastly differing abilities. For that reason, some algorithms were eliminated, and we shall return to those decisions in due course also.
Having established a scale of difficulty for each problem, we then fixed the number of function evaluations (using the same backoff procedure as above, where necessary). We ran each optimizer many times and reported the mean and standard deviation of the scores.
We tested the optimizers on objective functions with N=3 variables at first, then increased to N=6 and N=20. It probably goes without saying that some approaches to global optimization are better placed than others to tackle the increase in dimension, whereas others may start to grind and consume a great deal of memory.
As an example, we compared optuna, hyperopt, pysot and pattern (from PyMoo) in six dimensions, where 225 function evaluations were permitted. The performance scale used n=64, 90, 128, 181, 256, 362, 512, 724, 1024 and 1448 function evaluations, so a difference of 1 in the table below indicates that the minimum found was better, or worse, by an amount commensurate with a roughly 40% increase in computation time (higher scores are better). As you can see, the differences are indeed large from a practical perspective.
These preliminary results suggested that PySOT, and perhaps surrogate optimization in general, was well suited to the analytic functions - not altogether surprising. It suggests that considerable computation time might be saved by using it over, say, hyperopt whose performance was concerning.
In this discussion, a naughty optimizer is one that does not limit function evaluations to precisely what the user might desire. As an aside, this isn't a critique. Often the optimizers cycle through parameters, or perform some other subroutine that involves multiple function evaluations - so they only stop periodically to see whether the user-supplied maximum has been exceeded. It may not be sensible to do otherwise, depending on the details of the algorithm.
We tested all optimizers to determine whether this kind of behavior was present, and whether it warranted special treatment or merely a small hack. While it would be tempting to disqualify them, we instead attempted what might be considered a slightly one-sided analysis. To evaluate these optimizers, we ran them first and recorded an audited number of function evaluations (that is to say the actual number of times an objective function was called, using our own monitoring). This was then used when running their better behaved brethren. This way, all optimizers in the comparisons to follow use the same total number of function evaluations.
For example, when Scipy's SHGO is told to use 100 function evaluations, it actually used an average of 315 (plus or minus a standard deviation of 200 across problems). Nonetheless, as noted, all solvers in the table were given the same number of function evaluations during each pass (however many SHGO chose to use). The table below should be treated with care, as it isn't comparing the optimizers on a truly equal footing, but rather with a choice of the number of function evaluations that suits SHGO.
Lower is better ...
Similarly, when Scipy's Powell is supplied maxfev=100, say, it will first perform individual line searches in every dimension (each of which may require many function evaluations) and move through all dimensions before checking whether the function evaluation limit has been exceeded. This makes it difficult to control time or compute for high dimensional problems. For example, the earliest possible stop occurs somewhere between 50 and 156 function evaluations when the problem is six dimensional (for 20 dimensional problems it may take close to 600). Despite this limitation, Powell's method can be very effective for some types of problems, as suggested by the table below listing minimums found. Lower is better again.
Given that Powell is being allowed to run until it is happy, we have also highlighted in red the poorest performing optimizer from among HyperOpt, PySOT and Optuna.
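The overshoot is easy to observe with a wrapper that counts calls. A sketch using SciPy's Powell on a six-dimensional Rastrigin, asking for at most 100 evaluations (the exact count will vary with the problem and SciPy version, so no precise overshoot is claimed here):

```python
import numpy as np
from scipy.optimize import minimize

calls = {"n": 0}

def rastrigin6(x):
    # Standard Rastrigin, instrumented to count every evaluation.
    calls["n"] += 1
    x = np.asarray(x, dtype=float)
    return 10.0 * len(x) + float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

# Ask Powell for at most 100 evaluations on a six-dimensional problem ...
res = minimize(rastrigin6, x0=np.full(6, 2.5), method="Powell",
               options={"maxfev": 100})

# ... then audit how many it actually performed. The limit is only checked
# between line searches, so the count can overshoot the requested 100.
print(calls["n"])
```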
The analysis above did not include all algorithms available to us in the mentioned libraries. Here we present some additional comparisons which dictated these choices. In particular, we only selected the "pattern" algorithm from the PyMoo library, and some reasons are provided in the data to follow. We did not use any algorithms from the Platypus library, as initial results suggested they were uniformly worse than alternatives.
The evolutionary algorithm in the Platypus library sometimes runs as many function evaluations as one requests, but often runs a few more, or roughly 100 more. We did not go to great lengths to investigate this behavior but, again, we provide a somewhat fair comparison against the more predictable optimizers. Mean and standard deviation of minimums are shown here. Lower is better.
At least for these examples, there seems to be no compelling reason to use Platypus' evolutionary algorithm over, say, PySOT, if you think your problems' geometry might be in any way reflective of the (somewhat artificial) analytic functions we use.
Again, performance of Platypus' genetic algorithm is dominated by PySOT, and also optuna. Lower is better. This doesn't eliminate the possibility of a genetic search serving a purpose elsewhere.
PyMoo provides numerous algorithms. The library's intent is multi-objective optimization, so the single objective results here may not be too relevant; it isn't entirely fair to judge a multi-objective algorithm by its single objective performance. However, we compared the methods below, including only those that reliably stopped after roughly 100 function evaluations (as an example of an exception, the brkga algorithm ran to 1000). We included PySOT as a reference given its success above.
|rastrigin (scaled)||94.8+/- 38.5||184.3+/-62.8||59.4+/-65.8||38.9+/- 12.5||100.2+/-193.3||37.4+/-13.5|
|rastrigin||32.9+/- 15.8||51.1+/- 9.0||38.4+/- 13.4||24.1+/- 10.4||34.1+/- 15.5||25.0+/-16.5|
Increasing the dimensionality didn't materially change this assessment. Pattern search was still the front-runner.
|898+/- 352||1399+/- 187||1104+/- 295||1041+/- 182||853+/- 238|
|rastrigin||24.1+/- 11.7||29.3+/- 14.0||51.8+/- 8.7||32.5+/- 15.2||16.8+/- 13.2||41.1+/- 10.6|
|rastrigin (skew)||29.8+/- 6.9||46.4+/- 17.4||83.1+/- 25.4||126+/- 135||45.1+/- 18.3||120+/- 128|
|bohachevsky||101+/- 109||177+/- 134||9954+/- 4299||3552+/- 4811||4.4+/-1.1||3220+/- 4435|
|griewank||1.1+/-0.2||2.3+/-1.3||41.6+/- 11.3||9.8+/- 10.7||0.7+/-0.2||8.2+/-7.0|
|schaffer||23.5+/- 8.4||33.9+/- 6.4||35.6+/- 3.0||45.5+/- 6.2||23.3+/- 7.2||34.8+/- 11.3|
|rastrigin (scaled)||38.3+/-9||92.3+/- 35.6||215+/- 51.7||93.8+/- 83.4||40.0+/- 8.5||151+/- 340|
Given a larger number of function evaluations, the picture is somewhat murkier and nsga3 and ctaea seemed to be more competitive.
|bohachevsky||11.3+/- 6.3||7.2+/-2.5||4526+/- 1945||0.9+/- 0.3||3.7+/- 0.7||0.0+/-0.0|
|rastrigin (scaled)||13.8+/- 7.3||8.9+/-4.2||121+/- 23.7||59.6+/- 34.8||25.7+/- 8.6||9.6+/-8.3|
|schwefel||775+/- 224||906+/- 139||941+/- 167||858+/- 271||639+/- 182||673+/- 274|
|schaffer||34.0+/- 9.9||33.5+/- 13.8||31.1+/- 2.2||33.4+/- 3.3||11.0+/- 4.7||18.1+/- 11.9|
|rastrigin (skew)||12.7+/- 8.6||12.2+/- 8.5||72.5+/- 21.5||36.6+/-11.6||26.6+/- 9.3||21.8+/-9.2|
|rastrigin||17.1+/- 12.4||13.9+/- 9.9||38.3+/- 6.0||29.1+/- 10.7||8.4+/-6.6||23.1+/-9.4|
When both dimension and the number of evaluations are increased, the performance of pattern seemed once again to be clearly superior to its colleagues in the PyMoo package. It was not obviously better or worse than PySOT.
|schwefel||3377+/- 496||3412+/- 703||3793+/- 230||2319+/- 467||4637+/- 100||2859+/- 350|
|bohachevsky||750+/- 708||30.2+/- 5.5||3277+/-2514||28.2+/- 4.1||51454+/-8815||919+/-1092|
|griewank||3.9+/-1.0||1.0+/-0.0||24.9+/- 17.0||1.0+/-0.0||159+/- 30.0||1.9+/-0.2|
|rastrigin||112+/- 18.8||55.4+/- 24.8||115+/- 20.9||61.2+/- 9.7||187+/- 27.7||101+/-18.5|
|rastrigin (skew)||73.9+/- 4.0||137+/- 12.0||166+/- 39.5||81.7+/- 12.0||549+/- 144||71.9+/-34.6|
|rastrigin (scaled)||156+/- 142||163+/- 41.2||589+/- 86.1||65.1+/- 10.1||854+/- 100||143+/-55.0|
|schaffer||141+/- 28.1||79.2+/- 32.3||151+/- 7.2||98.6+/- 12.7||164+/- 4.6||153+/-10.6|
As a minor note, Facebook's ax-platform was a late scratching because the current build on PyPI was broken. It did subsequently build directly from GitHub, but not in time for these results; it will be included soon.
To provide a different kind of challenge to the optimizers, we supplied them with a negative log-likelihood function for a model with five parameters. The parameters govern the action of a Kalman-like filter. The calculation is performed over 1000 data points in a time series. In order to generate a variety of different objective functions, we considered different combinations of three parameters to vary, while fixing two. The intent was to move away from somewhat artificially constructed analytic functions in favor of something that might arise in the wild. It must be said, however, that performance on this particular problem may not be indicative of performance elsewhere.
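For readers who have not met this construction, a heavily simplified stand-in shows the shape of such an objective. This is a two-parameter local-level (random-walk-plus-noise) model rather than the five-parameter filter actually used in the study, with the likelihood computed by the standard Kalman prediction-error decomposition:

```python
import numpy as np

def neg_log_likelihood(params, y):
    """Negative log-likelihood of a local-level model via the Kalman
    prediction-error decomposition.

    Illustrative stand-in only, not the study's actual model. params are
    (log_q, log_r): log-variances of process and observation noise.
    """
    log_q, log_r = params
    q, r = np.exp(log_q), np.exp(log_r)
    m, p = y[0], 1.0              # filtered state mean and variance
    nll = 0.0
    for obs in y[1:]:
        p_pred = p + q            # predict step
        s = p_pred + r            # innovation variance
        e = obs - m               # innovation
        nll += 0.5 * (np.log(2.0 * np.pi * s) + e * e / s)
        k = p_pred / s            # Kalman gain
        m += k * e                # update step
        p = (1.0 - k) * p_pred
    return nll

# An optimizer would then minimize lambda params: neg_log_likelihood(params, y)
# over a box in (log_q, log_r) space.
```

Unlike the analytic benchmarks, every evaluation here entails a full filtering pass over the series, which is why computation budgets matter so much more in this phase.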
We minimized negative log-likelihood for a time series model. As anticipated, SHGO typically did a very good job although, when instructed to limit evaluations to 225, it performed an average of 860, plus or minus 300 or so. This was the only bad egg included in this particular comparison, as hyperopt, pysot, pattern and optuna could be relied upon to spend exactly the same number of evaluations. Bearing in mind that lower is better here, SHGO and pattern seem initially to be very reliable.
|expnorm (varying g1,logK,loc)||1205+/-0.0||1208+/-2.2||1206+/-0.5||1476+/-0.6|
|expnorm (varying g2,logK,logScale)||1154+/-0.8||1205+/-66.7||1195+/-101||1561+/-1.0|
|expnorm (varying logK,loc,logScale)||1472+/-3.9||1474+/-5.0||1471+/-0.7||1477+/-2.0|
|expnorm (varying g1,g2,loc)||1152+/-1.5||1158+/-13.4||1168+/-24.7||1723+/-1.5|
|expnorm (varying g1,loc,logScale)||1205+/-0.0||1216+/-11.4||1205+/-0.3||1487+/-0.3|
|expnorm (varying g2,loc,logScale)||1154+/-0.1||1193+/-63.6||1188+/-111||1594+/-1.1|
|expnorm (varying g1,logK,logScale)||1205+/-0.1||1219+/-14.2||1209+/-5.0||1478+/-3.5|
|expnorm (varying g2,logK,loc)||1154+/-0.3||1156+/-1.7||1155+/-2.0||1560+/-0.8|
|expnorm (varying g1,g2,logScale)||1151+/-0.0||1203+/-22.1||1170+/-22.2||1596+/-1.4|
|expnorm (varying g1,g2,logK)||1151+/-0.0||1168+/-20.4||1170+/-23.9||1561+/-1.0|
For relatively low-dimensional problems, the scipy global optimization methods are certainly worthy of consideration. We are intrigued by the performance of scipy.shgo, for example, a relatively recent addition grounded in homology. However, the method appears to consume considerable resources in higher dimensional settings, and the inability to control the number of function evaluations could be awkward in some applications. Similarly, Scipy's implementation of Powell's method is not to be underestimated, though it presents a similar difficulty, and PyMoo's implementation of pattern search is strong too.
That said, we make no recommendation for one software suite over another, if for no other reason than that some are intended to facilitate mixing and matching of algorithms. Performing a fastidious assessment is less well motivated in those cases, and invites a combinatorial explosion.
The reader will have their own needs. There will be cases where a search must be performed quickly, using 20 guesses say, and others where we can afford 1,000 evaluations or more. Our analysis was steered toward cases where function evaluation is particularly expensive.
The reader may prefer one package over another for functionality provided (such as multi-objective optimization, multi-fidelity search or moving peak search) or preference for one design style over another (simple function calls versus classes, method of providing constraints, and so forth). It seems unfair to judge some packages on out of the box single objective optimization when their intended contribution is broader.
We welcome pull requests to the tuneup Python package, used for this analysis, and once again apologize for any omissions. We hope those of you developing time series algorithms for the prediction network try several of the approaches mentioned, as it is difficult to anticipate in advance which will serve your needs the best.
For an example of using optimization in a crawler, see FitCrawler.