 Research Paper
 Open Access
Effective hyperparameter optimization using Nelder-Mead method in deep learning
IPSJ Transactions on Computer Vision and Applications volume 9, Article number: 20 (2017)
Abstract
In deep learning, deep neural network (DNN) hyperparameters can severely affect network performance. Currently, such hyperparameters are frequently optimized by several methods, such as Bayesian optimization and the covariance matrix adaptation evolution strategy. However, it is difficult for nonexperts to employ these methods. In this paper, we adapted the simpler coordinate-search and Nelder-Mead methods to optimize hyperparameters. Several hyperparameter optimization methods were compared by configuring DNNs for character recognition and age/gender classification. Numerical results demonstrated that the Nelder-Mead method outperforms the other methods and achieves state-of-the-art accuracy for age/gender classification.
1 Introduction
The evolution of deep neural networks (DNNs) has dramatically improved the accuracy of character recognition [1], object recognition [2, 3], and other tasks. However, their increasing complexity also increases the number of hyperparameters, which makes hyperparameter tuning an intractable task.
Traditionally, DNN hyperparameters are adjusted using manual search, grid search, or random search [4]. However, the search space expands exponentially with the number of hyperparameters; thus, such naive methods no longer work well. Therefore, more sophisticated hyperparameter optimization methods are required.
In deep learning, a hyperparameter optimization problem can be formulated as a stochastic black box optimization problem to minimize a noisy black box objective function f(x):
$$ \min_{\mathbf{x} \in \chi} f(\mathbf{x}). $$
Here, the only information available about the objective function is its value at a point x, observed with noise ε:
$$ y = f(\mathbf{x}) + \varepsilon. $$
This means that no analytical properties of the objective function, e.g., its derivatives, can be exploited for optimization. In addition, a loss function of the target DNN is typically chosen as f(x), and its evaluation is expensive because it requires training and testing the DNN. The search space χ comprises combinations of multiple variable types, such as real numbers, integers, and categories.
Currently, Bayesian optimization [5] and the covariance matrix adaptation evolution strategy (CMA-ES) [6] are considered the most promising methods for DNN hyperparameter optimization, and their optimization ability has been proven experimentally [7–10]. However, Bayesian optimization has some hyperparameters of its own that significantly affect its optimization performance, e.g., the choices of its kernel and acquisition function. Moreover, a non-convex acquisition function must be maximized in each iteration of the optimization process. On the other hand, CMA-ES requires several populations and generations to achieve sufficient performance. Although such calculations can be parallelized easily, significant computing resources are required.
Evidently, simple classical manual search, grid search, and random search remain common; we therefore consider that most practitioners are unwilling to tune the hyperparameters of a difficult optimization method or to implement such a method themselves, and do not have the computing resources needed to optimize DNN hyperparameters with it.
In this paper, we describe simple substitute methods, i.e., the coordinate-search and Nelder-Mead methods, for hyperparameter optimization in deep learning. To the best of our knowledge, no report has examined the application of these methods to DNN hyperparameter optimization.
Our numerical results indicate that these methods are more efficient than other well-known methods. In particular, the Nelder-Mead method is the most effective for deep learning.
2 Related work
2.1 Random search
Random search is one of the simplest ways to optimize DNN hyperparameters. This method iteratively generates hyperparameter settings and evaluates the objective function. Random search has excellent parallelization and can handle integer and categorical hyperparameters naturally. Bergstra and Bengio demonstrated that random search outperforms a manual search by a human expert and grid search [4].
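To make the procedure concrete, a minimal random search loop can be sketched as follows; the search space, the toy objective, and all names here are illustrative choices, not from the paper:

```python
import random

def random_search(objective, space, n_trials, seed=0):
    """Minimize `objective` over `space` by uniform random sampling.

    `space` maps each hyperparameter name to a (low, high) range; ranges
    with integer bounds are sampled as integers. Every trial is independent,
    which is why random search parallelizes trivially.
    """
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(n_trials):
        x = {name: rng.randint(lo, hi) if isinstance(lo, int)
                   else rng.uniform(lo, hi)
             for name, (lo, hi) in space.items()}
        y = objective(x)  # in practice: train the DNN and return its validation loss
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Toy stand-in for a validation loss, minimized at log_lr = -3.
space = {"log_lr": (-5.0, -1.0), "batch_size": (16, 256)}
best, loss = random_search(lambda x: (x["log_lr"] + 3.0) ** 2, space, n_trials=100)
```

In a real setting the lambda would be replaced by a function that trains and evaluates the network for the sampled setting.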
2.2 Bayesian optimization
Bayesian optimization is one of the most remarkable hyperparameter optimization methods to emerge in recent years. Its basic concept was proposed in the 1970s; however, it has been significantly improved since then, driven by the attention paid to DNN hyperparameter optimization.
There are several variations of Bayesian optimization, e.g., the Gaussian process (GP)-based variation [11], the Tree-structured Parzen Estimator (TPE) [7], and Deep Networks for Global Optimization (DNGO) [12]. The most standard one is the GP-based variation.
GP-based Bayesian optimization is shown in Algorithm 1.
In this method, we assume that the objective function follows a GP specified by its mean function m and kernel k:
$$ f(\mathbf{x}) \sim \mathcal{GP}\left(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}')\right). $$
For simplicity, we assume m(x)=0. Then, we must choose the kernel k(x, x^{′}). For the kernel, an automatic relevance determination (ARD) squared exponential (SE) kernel
$$ k_{\mathrm{SE}}(\mathbf{x}, \mathbf{x}') = \theta_{0} \exp\left(-\frac{1}{2} r^{2}(\mathbf{x}, \mathbf{x}')\right), \qquad r^{2}(\mathbf{x}, \mathbf{x}') = \sum_{d=1}^{D} \frac{(x_{d} - x'_{d})^{2}}{\theta_{d}^{2}}, $$
or an ARD Matérn 5/2 kernel
$$ k_{\mathrm{M52}}(\mathbf{x}, \mathbf{x}') = \theta_{0} \left(1 + \sqrt{5 r^{2}(\mathbf{x}, \mathbf{x}')} + \frac{5}{3} r^{2}(\mathbf{x}, \mathbf{x}')\right) \exp\left(-\sqrt{5 r^{2}(\mathbf{x}, \mathbf{x}')}\right) $$
is commonly used in Bayesian optimization [8]. Here, θ_{0}, θ_{1}, …, and θ_{D} are the kernel’s hyperparameters.
Once k(x, x^{′}) is determined, we can predict the function value at a new sample point x_{t+1} from previous observations \(\mathcal {D}_{1:t} = \{\mathbf {x}_{1:t}, y_{1:t}\}\):
$$ P(y_{t+1} \mid \mathcal{D}_{1:t}, \mathbf{x}_{t+1}) = \mathcal{N}\left(\mu_{t}(\mathbf{x}_{t+1}), \sigma_{t}^{2}(\mathbf{x}_{t+1})\right), $$
where
$$ \mu_{t}(\mathbf{x}_{t+1}) = \mathbf{k}^{T}\mathbf{K}^{-1}\mathbf{y}_{1:t}, \qquad \sigma_{t}^{2}(\mathbf{x}_{t+1}) = k(\mathbf{x}_{t+1}, \mathbf{x}_{t+1}) - \mathbf{k}^{T}\mathbf{K}^{-1}\mathbf{k}, $$
with K_{ij} = k(x_{i}, x_{j}) and k_{i} = k(x_{i}, x_{t+1}).
The remaining problem is how to determine new sample points iteratively. To determine new candidates for a sample, we generally employ an acquisition function. Here, it is necessary to select an acquisition function that achieves a good balance between exploration and exploitation based on past observations. One well-known acquisition function is expected improvement (EI):
$$ \mathrm{EI}(\mathbf{x}) = \mathbb{E}\left[\max\left\{f(\mathbf{x}^{+}) - f(\mathbf{x}),\ 0\right\}\right], $$
where x^{+} is the best point observed so far.
The point that maximizes the acquisition function becomes the new sample point. Although maximizing the non-convex acquisition function is difficult, the evaluation cost of the acquisition function is considerably less than that of the original objective function; therefore, it is easier to handle than the original problem.
Practically, Bayesian optimization is combined with random search to collect initial observation data.
Bergstra et al. and Snoek et al. performed several computational experiments. The results demonstrated that Bayesian optimization outperforms manual search by a human expert and random search [7, 8].
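For illustration, the GP-EI loop of Algorithm 1 can be sketched with a fixed kernel and a grid-maximized acquisition function. This is a deliberate simplification: real implementations learn the kernel hyperparameters and maximize EI with a nonlinear optimizer, and the one-dimensional toy objective stands in for a validation loss:

```python
import math
import numpy as np

def se_kernel(a, b, theta0=1.0, length=0.3):
    """Squared exponential kernel with a single length scale (the ARD
    variant described above uses one length scale per dimension)."""
    return theta0 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-5):
    """GP predictive mean and variance at x_new given noisy observations."""
    K = se_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = se_kernel(x_obs, x_new)
    K_inv = np.linalg.inv(K)
    mu = k_star.T @ K_inv @ y_obs
    var = se_kernel(x_new, x_new).diagonal() - np.einsum(
        "ij,ik,kj->j", k_star, K_inv, k_star)
    return mu, np.maximum(var, 1e-12)

def propose(x_obs, y_obs, grid):
    """Next sample = argmax of expected improvement (minimization form)."""
    mu, var = gp_posterior(x_obs, y_obs, grid)
    sigma = np.sqrt(var)
    z = (y_obs.min() - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    ei = (y_obs.min() - mu) * cdf + sigma * pdf
    return grid[int(np.argmax(ei))]

rng = np.random.default_rng(0)
f = lambda x: (x - 0.3) ** 2               # toy stand-in for the validation loss
x_obs = rng.uniform(0.0, 1.0, 3)           # random-search initialization
y_obs = f(x_obs)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(15):
    x_next = propose(x_obs, y_obs, grid)
    x_obs, y_obs = np.append(x_obs, x_next), np.append(y_obs, f(x_next))
```

The loop mirrors Algorithm 1: predict with the GP posterior, maximize EI, evaluate, and append the new observation.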
2.3 Covariance matrix adaptation evolution strategy
While Bayesian optimization has been developed in the machine learning community, CMA-ES has been developed in the optimization community. CMA-ES is a type of evolutionary computation that demonstrates outstanding performance in benchmarks as a state-of-the-art black box optimization method [13].
The (μ_{W}, λ)-CMA-ES [6] is shown in Algorithm 2. It conducts a weighted recombination from the μ best out of λ individuals. The procedure is explained as follows.

(i) Initialize the mean \(\langle \mathbf {x} \rangle _{\mathrm {w}}^{(0)}\) and the standard deviation σ^{(0)} of the individuals. Set the evolution paths \(\mathbf {p}_{\mathrm {c}}^{(0)} = \mathbf {p}_{\sigma }^{(0)} = \mathbf {0}\) and the covariance matrix C^{(0)} = I.

(ii) Generate the generation-(g+1) individuals \(\mathbf {x}_{k}^{(g+1)} \; (k = 1,\dots,\lambda)\):
$$ \mathbf{x}^{(g+1)}_{k} = \langle \mathbf{x} \rangle_{\mathrm{w}}^{(g)} + \sigma^{(g)} \mathbf{B}^{(g)} \mathbf{D}^{(g)} \mathbf{z}_{k}^{(g+1)}, \tag{10} $$
where
$$ \langle \mathbf{x} \rangle_{\mathrm{w}}^{(g)} := \frac{1}{\sum_{i=1}^{\mu} w_{i}} \sum_{i=1}^{\mu} w_{i} \mathbf{x}_{i:\lambda}^{(g)}, \qquad \mathbf{B}^{(g)}\mathbf{D}^{(g)}\left(\mathbf{B}^{(g)}\mathbf{D}^{(g)}\right)^{T} = \mathbf{C}^{(g)}, \qquad \mathbf{z}_{k}^{(g+1)} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). $$
Here, w_{1}, …, w_{μ} are weights, and i:λ denotes the ith best individual.

(iii) Update the evolution path \(\mathbf {p}^{(g)}_{\mathrm {c}}\) and the covariance matrix C^{(g)}:
$$ \mathbf{p}^{(g+1)}_{\mathrm{c}} = (1 - c_{\mathrm{c}})\, \mathbf{p}_{\mathrm{c}}^{(g)} + c_{\mathrm{c}}^{\mathrm{u}} c_{\mathrm{w}}\, \mathbf{B}^{(g)} \mathbf{D}^{(g)} \langle \mathbf{z} \rangle_{\mathrm{w}}^{(g+1)}, \tag{11} $$
$$ \mathbf{C}^{(g+1)} = (1 - c_{\mathrm{cov}})\, \mathbf{C}^{(g)} + c_{\mathrm{cov}}\, \mathbf{p}^{(g+1)}_{\mathrm{c}} \left(\mathbf{p}_{\mathrm{c}}^{(g+1)}\right)^{T}, \tag{12} $$
where
$$ c_{\mathrm{c}}^{\mathrm{u}} := \sqrt{c_{\mathrm{c}}(2 - c_{\mathrm{c}})}, \qquad c_{\mathrm{w}} := \frac{\sum_{i=1}^{\mu} w_{i}}{\sqrt{\sum_{i=1}^{\mu} w_{i}^{2}}}, \qquad \langle \mathbf{z} \rangle_{\mathrm{w}}^{(g+1)} := \frac{1}{\sum_{i=1}^{\mu} w_{i}} \sum_{i=1}^{\mu} w_{i} \mathbf{z}_{i:\lambda}^{(g+1)}. $$
Here, c_{c} and c_{cov} are hyperparameters.

(iv) Update the evolution path \(\mathbf {p}^{(g)}_{\sigma }\) and the step size σ^{(g)}:
$$ \mathbf{p}_{\sigma}^{(g+1)} = (1 - c_{\sigma})\, \mathbf{p}_{\sigma}^{(g)} + c_{\sigma}^{\mathrm{u}} c_{\mathrm{w}}\, \mathbf{B}^{(g)} \langle \mathbf{z} \rangle_{\mathrm{w}}^{(g+1)}, \tag{13} $$
$$ \sigma^{(g+1)} = \sigma^{(g)} \exp\left(\frac{1}{d_{\sigma}}\, \frac{\left\|\mathbf{p}_{\sigma}^{(g+1)}\right\| - \hat{\chi}_{\mathrm{n}}}{\hat{\chi}_{\mathrm{n}}}\right), \tag{14} $$
where
$$ c_{\sigma}^{\mathrm{u}} := \sqrt{c_{\sigma}(2 - c_{\sigma})}, \qquad \hat{\chi}_{\mathrm{n}} = \mathbb{E}\left[\left\|\mathcal{N}(\mathbf{0}, \mathbf{I})\right\|\right]. $$
Here, c_{σ} and d_{σ} are hyperparameters.
Details about the hyperparameters used to update the CMA-ES parameters are provided in the literature [6].
Since the evaluations of each individual in each generation can be calculated simultaneously, CMA-ES can be parallelized easily. Watanabe and Le Roux and Loshchilov et al. demonstrated that CMA-ES outperforms manual search by a human expert and Bayesian optimization in certain cases [9, 10].
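A compact sketch of the update equations (10)–(14) is given below. The learning-rate constants are common defaults (rank-one covariance update only) rather than the tuned values discussed in [6], and the toy sphere objective stands in for a validation loss:

```python
import numpy as np

def cma_es(f, x0, sigma, n_gen=80, lam=12, seed=0):
    """Simplified (mu_W, lambda)-CMA-ES following Eqs. (10)-(14)."""
    rng = np.random.default_rng(seed)
    n, mu = len(x0), lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                   # recombination weights
    c_w = 1.0 / np.sqrt((w ** 2).sum())            # = sum(w) / sqrt(sum(w^2))
    c_c = c_sigma = 4.0 / (n + 4.0)
    c_cov = 2.0 / (n + np.sqrt(2.0)) ** 2
    d_sigma = 1.0 + 1.0 / c_sigma
    chi_n = np.sqrt(n) * (1.0 - 1.0 / (4.0 * n) + 1.0 / (21.0 * n ** 2))
    mean, C = np.asarray(x0, float), np.eye(n)
    p_c, p_sigma = np.zeros(n), np.zeros(n)
    for _ in range(n_gen):
        D2, B = np.linalg.eigh(C)                  # C = B diag(D2) B^T
        BD = B * np.sqrt(np.maximum(D2, 1e-20))    # so that BD @ BD.T == C
        z = rng.standard_normal((lam, n))
        x = mean + sigma * z @ BD.T                # Eq. (10)
        order = np.argsort([f(xi) for xi in x])
        z_w = w @ z[order[:mu]]                    # <z>_w over the mu best
        mean = mean + sigma * BD @ z_w
        p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c)) * c_w * BD @ z_w            # Eq. (11)
        C = (1 - c_cov) * C + c_cov * np.outer(p_c, p_c)                             # Eq. (12)
        p_sigma = (1 - c_sigma) * p_sigma \
            + np.sqrt(c_sigma * (2 - c_sigma)) * c_w * B @ z_w                       # Eq. (13)
        sigma *= np.exp((np.linalg.norm(p_sigma) - chi_n) / (d_sigma * chi_n))       # Eq. (14)
    return mean

# Toy sphere objective with optimum at (1, 1), standing in for a loss.
x_best = cma_es(lambda x: float(np.sum((x - 1.0) ** 2)), x0=[3.0, -2.0], sigma=1.0)
```

In each generation, the λ evaluations of f inside the list comprehension are independent, which is the parallelization opportunity noted above.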
3 Coordinate-search and Nelder-Mead methods
In the previous section, we introduced the random search, Bayesian optimization, and CMA-ES methods. The effectiveness of these methods has already been demonstrated experimentally, and the results indicate that the latter two methods are very promising and considered superior to random search. However, both Bayesian optimization and CMA-ES have many hyperparameters of their own that affect their optimization performance. To set these hyperparameters appropriately, it is necessary to have sufficient knowledge about the given method. In addition, Bayesian optimization must maximize its non-convex acquisition function, and CMA-ES requires significant computing resources to exploit its advantages. These factors make it difficult for nonexperts to utilize these methods.
In this section, we introduce two optimization methods, coordinate-search and Nelder-Mead, that are easy to implement. These methods have fewer hyperparameters to adjust and are practically usable with fewer computing resources.
3.1 Mathematical concepts
Before introducing the methods, we define some required mathematical concepts here.
Definition 1
The positive span of a set of vectors \([\mathbf {v}_{1} \cdots \mathbf {v}_{r}] \in \mathbb {R}^{n}\) is the convex cone
$$ \left\{ \alpha_{1}\mathbf{v}_{1} + \cdots + \alpha_{r}\mathbf{v}_{r} \;:\; \alpha_{i} \geq 0,\ i = 1, \dots, r \right\}. $$
Definition 2
A positive spanning set in \(\mathbb {R}^{n}\) is a set of vectors whose positive span is \(\mathbb {R}^{n}\).
Definition 3
The set \([\mathbf {v}_{1} \cdots \mathbf {v}_{r}] \in \mathbb {R}^{n}\) is considered positively dependent if one of the vectors is in the positive span of the remaining vectors; otherwise, the set is considered positively independent.
Definition 4
A positive basis for \(\mathbb {R}^{n}\) is a positively independent set whose positive span is \(\mathbb {R}^{n}\). A positive basis for \(\mathbb {R}^{n}\) that has n+1 vectors is considered a minimal positive basis and a positive basis that has 2n vectors is considered a maximal positive basis (Fig. 1). Here, a maximal positive basis is denoted as D _{⊕}.
Definition 5
A simplex of dimension m is a convex hull of an affinely independent set of points Y={y ^{0},y ^{1},…,y ^{m}}.
3.2 Coordinate-search method
The coordinate-search method [14] (Algorithm 3, Fig. 2) is one of the simplest direct search methods. It minimizes the objective function iteratively using the maximal positive basis D_{⊕} = [I −I] = [e_{1} ⋯ e_{n} −e_{1} ⋯ −e_{n}].
This method iteratively performs a poll step to search for a better solution and a parameter update to adjust its step size.

(i) Poll step: Order the poll set P_{k} = {x_{k} + α_{k}d : d ∈ D_{⊕}}. Evaluate f at the poll points in order. If a poll point x_{k} + α_{k}d_{k} satisfying f(x_{k} + α_{k}d_{k}) < f(x_{k}) is found, then stop polling, set x_{k+1} = x_{k} + α_{k}d_{k}, and declare the iteration and poll step successful.

(ii) Parameter update: If iteration k succeeds, then set α_{k+1} = α_{k} (or α_{k+1} = 2α_{k}). Otherwise, set α_{k+1} = α_{k}/2.
When the step size becomes sufficiently small, the search is terminated. Note that the evaluation of functions in the poll step can be parallelized.
The performance of this method deteriorates when the search ranges have different scales; thus, in this study, we normalize parameters to [0, 1] in our computational experiments. In addition, we adopt the updating rule α_{k+1} = 2α_{k} on iteration success and randomly order the vectors of the poll set for each iteration.
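Under the normalization and update rules above, a minimal implementation can be sketched as follows; the clamping to [0, 1], the termination threshold, and the toy separable objective are our illustrative choices:

```python
import random

def coordinate_search(f, x0, alpha=0.25, alpha_min=1e-3, seed=0):
    """Coordinate search with the maximal positive basis [I -I].

    Poll directions are shuffled each iteration, the step doubles on a
    successful poll and halves on a failed one, and coordinates are kept
    in the normalized range [0, 1].
    """
    rng = random.Random(seed)
    n = len(x0)
    x, fx = list(x0), f(x0)
    directions = [(i, s) for i in range(n) for s in (1.0, -1.0)]
    while alpha > alpha_min:
        rng.shuffle(directions)                    # random poll order
        for i, s in directions:
            y = list(x)
            y[i] = min(1.0, max(0.0, y[i] + s * alpha))
            fy = f(y)
            if fy < fx:                            # opportunistic: stop at first success
                x, fx = y, fy
                alpha *= 2.0                       # success: expand the step
                break
        else:
            alpha /= 2.0                           # failure: shrink the step
    return x, fx

# Toy objective on the normalized search space, minimized at (0.3, 0.7).
x, fx = coordinate_search(lambda v: (v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2, [0.0, 0.0])
```

The inner `for` loop over poll directions is exactly the part that could be evaluated in parallel, as noted above.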
3.3 Nelder-Mead method
The Nelder-Mead method [14, 15] (Algorithm 4, Fig. 3) is an optimization method based on a simplex, proposed by Nelder and Mead. Gilles et al. applied this method to the hyperparameter tuning problem in support vector machine (SVM) modeling and demonstrated that it can reliably find very good hyperparameter settings for SVMs [16]. Currently, the Nelder-Mead method is not considered in DNN research; however, it has a long history and many achievements in other research areas [14]. Thus, we consider it worth applying the Nelder-Mead method to DNN hyperparameter optimization. Note that the SVM in the study by Gilles et al. has only two hyperparameters, whereas DNNs often have more than ten times as many; thus, our task is more challenging.
The Nelder-Mead method minimizes the objective function by repeating its evaluation at each vertex of the simplex and by replacing points according to the following procedure (Figs. 4 and 5).

(i) Order: Order the n+1 vertices Y = {y^{0}, y^{1}, …, y^{n}} as follows:
$$ f^{0} = f(\mathbf{y}^{0}) \leq f^{1} = f(\mathbf{y}^{1}) \leq \cdots \leq f^{n} = f(\mathbf{y}^{n}). $$
(ii) Reflect: Reflect the worst vertex y^{n} over the centroid \(\mathbf {y}^{c} = \sum _{i = 0}^{n - 1}\mathbf {y}^{i} / n\) of the remaining n vertices:
$$ \mathbf{y}^{r} = \mathbf{y}^{c} + \delta^{r}(\mathbf{y}^{c} - \mathbf{y}^{n}). $$
Evaluate f^{r} = f(y^{r}). If f^{0} ≤ f^{r} < f^{n−1}, then replace y^{n} with the reflected point y^{r} and terminate iteration k: Y_{k+1} = {y^{0}, y^{1}, …, y^{n−1}, y^{r}}.

(iii) Expand: If f^{r} < f^{0}, calculate
$$ \mathbf{y}^{e} = \mathbf{y}^{c} + \delta^{e}(\mathbf{y}^{c} - \mathbf{y}^{n}) $$
and evaluate f^{e} = f(y^{e}). If f^{e} ≤ f^{r}, then replace y^{n} with the expansion point y^{e} and terminate iteration k: Y_{k+1} = {y^{0}, y^{1}, …, y^{n−1}, y^{e}}. Otherwise, replace y^{n} with the reflected point y^{r} and terminate iteration k: Y_{k+1} = {y^{0}, y^{1}, …, y^{n−1}, y^{r}}.

(iv) Contract: If f^{r} ≥ f^{n−1}, then a contraction is performed between the best of y^{r} and y^{n}.

(a) Outside contraction: If f^{r} < f^{n}, perform an outside contraction:
$$ \mathbf{y}^{oc} = \mathbf{y}^{c} + \delta^{oc}(\mathbf{y}^{c} - \mathbf{y}^{n}) $$
and evaluate f^{oc} = f(y^{oc}). If f^{oc} ≤ f^{r}, then replace y^{n} with the outside contraction point y^{oc} and terminate iteration k: Y_{k+1} = {y^{0}, y^{1}, …, y^{n−1}, y^{oc}}. Otherwise, perform a shrink.

(b) Inside contraction: If f^{r} ≥ f^{n}, perform an inside contraction:
$$ \mathbf{y}^{ic} = \mathbf{y}^{c} + \delta^{ic}(\mathbf{y}^{c} - \mathbf{y}^{n}) $$
and evaluate f^{ic} = f(y^{ic}). If f^{ic} < f^{n}, then replace y^{n} with the inside contraction point y^{ic} and terminate iteration k: Y_{k+1} = {y^{0}, y^{1}, …, y^{n−1}, y^{ic}}. Otherwise, perform a shrink.


(v) Shrink: Evaluate f at the n points y^{0} + γ^{s}(y^{i} − y^{0}), where i = 1, …, n, replace y^{1}, …, y^{n} with these points, and terminate iteration k: Y_{k+1} = {y^{0} + γ^{s}(y^{i} − y^{0}), i = 0, …, n}.
Here, γ^{s}, δ^{ic}, δ^{oc}, δ^{r}, and δ^{e} are constant hyperparameters that usually take the following values:
$$ \gamma^{s} = \frac{1}{2}, \quad \delta^{ic} = -\frac{1}{2}, \quad \delta^{oc} = \frac{1}{2}, \quad \delta^{r} = 1, \quad \delta^{e} = 2. $$
Note that each step of an iteration, e.g., initialization and shrink operations, can be parallelized easily.
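A compact sketch of steps (i)–(v) with the standard coefficient values is given below. The axis-aligned initial simplex and the toy objective are illustrative choices, and the parallel evaluation noted above is omitted for clarity:

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, max_iter=200):
    """Nelder-Mead with delta_r = 1, delta_e = 2, delta_oc = 1/2,
    delta_ic = -1/2, gamma_s = 1/2."""
    n = len(x0)
    # Initial simplex: x0 plus a step along each coordinate axis.
    Y = [np.asarray(x0, float)]
    for i in range(n):
        v = np.asarray(x0, float)
        v[i] += step
        Y.append(v)
    F = [f(y) for y in Y]
    for _ in range(max_iter):
        order = np.argsort(F)                           # (i) order
        Y, F = [Y[i] for i in order], [F[i] for i in order]
        yc = np.mean(Y[:-1], axis=0)                    # centroid of the best n vertices
        yr = yc + 1.0 * (yc - Y[-1])                    # (ii) reflect
        fr = f(yr)
        if F[0] <= fr < F[-2]:
            Y[-1], F[-1] = yr, fr
        elif fr < F[0]:                                 # (iii) expand
            ye = yc + 2.0 * (yc - Y[-1])
            fe = f(ye)
            Y[-1], F[-1] = (ye, fe) if fe <= fr else (yr, fr)
        else:                                           # (iv) contract
            if fr < F[-1]:
                yk = yc + 0.5 * (yc - Y[-1])            # (a) outside contraction
                fk = f(yk)
                accept = fk <= fr
            else:
                yk = yc - 0.5 * (yc - Y[-1])            # (b) inside contraction
                fk = f(yk)
                accept = fk < F[-1]
            if accept:
                Y[-1], F[-1] = yk, fk
            else:                                       # (v) shrink toward the best vertex
                Y = [Y[0] + 0.5 * (y - Y[0]) for y in Y]
                F = [F[0]] + [f(y) for y in Y[1:]]
    best = int(np.argmin(F))
    return Y[best], F[best]

# Toy objective standing in for a validation loss, minimized at (0.3, 0.7).
x, fx = nelder_mead(lambda v: (v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2, [0.0, 0.0])
```

Each iteration costs only one or two evaluations of f outside the rare shrink step, which is what makes the method attractive when one evaluation means training a DNN.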
4 Poor hyperparameter setting detection
DNNs are very sensitive to hyperparameter settings. As a result, training can fail simply because some hyperparameters, e.g., the learning rate, are slightly inappropriate. When appropriate hyperparameter values are given, training loss is reduced in each iteration (Fig. 6, top graph); otherwise, regardless of how many iterations have been executed, training loss is not reduced (Fig. 6, bottom graph).
The advantage of human experts is that they can detect training failures and terminate them at an early stage. Domhan et al. proposed a method that accelerates hyperparameter optimization methods by detecting and terminating such training failures using learning curve prediction [17]. In addition, Klein et al. proposed a specialized Bayesian neural network to model DNN learning curves [18, 19]. We apply Algorithm 5 to detect training failures at an early stage.
Note that this method does not optimize hyperparameters directly; rather, it accelerates a hyperparameter optimization method. If many training runs with poor hyperparameter settings appear in the optimization process, this detection process improves the execution time of the optimization method.
In our experiments, we apply this method to all hyperparameter optimization methods with n equal to 10% of the training iterations and t equal to 0.8. These values are chosen based on experience. As can be seen in Fig. 6, the learning curve of poor hyperparameter settings is distinctive and easy to detect; thus, n and t need not be chosen too carefully.
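The detection rule can be sketched as follows. Since Algorithm 5 itself is not reproduced here, the precise criterion (checking the loss after the first n iterations against t times its initial value) is our reading of the description above, and all names are illustrative:

```python
def train_with_detection(train_step, initial_loss, total_iters, frac=0.10, t=0.8):
    """Run training, aborting early when the loss curve signals failure.

    After the first `frac` of the iterations, training is declared failed
    unless the loss has dropped below `t` times its initial value, and a
    penalty value is returned immediately so the optimizer moves on.
    """
    check_at = max(1, int(frac * total_iters))
    loss = initial_loss
    for it in range(1, total_iters + 1):
        loss = train_step(it)                 # one training iteration -> current loss
        if it == check_at and loss > t * initial_loss:
            return float("inf")               # poor hyperparameter setting detected
    return loss

# A decaying loss curve passes the check; a flat curve is cut off early.
good = train_with_detection(lambda it: 2.0 * 0.9 ** it, 2.0, 100)
bad = train_with_detection(lambda it: 2.01, 2.0, 100)
```

In the flat case, 90% of the training iterations are skipped, which is the source of the speedup reported in Section 5.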
5 Numerical results
We perform computational experiments to optimize real and integer hyperparameters in combination with various datasets, tasks, and convolutional neural networks (CNNs) to compare the performance of the random search, Bayesian optimization, CMA-ES, coordinate-search, and Nelder-Mead methods.
The experimental settings for each method are given in Table 1. We use the first 100 random search evaluations to initialize the Bayesian optimization and coordinate-search methods. The number of evaluations and initialization parameters of CMA-ES and Bayesian optimization are determined with reference to the literature [10]. We implement CMA-ES using Distributed Evolutionary Algorithms in Python (DEAP) [20], which is an evolutionary computation framework. In addition, for optimization methods that cannot handle integer values directly, integer hyperparameters are handled as continuous values and rounding is performed when evaluating the objective function.
5.1 MNIST
The LeNet [1] hyperparameters are optimized by the five methods to measure their performance. This network performs a 10class classification of the MNIST handwritten digit database [21] (Fig. 7). Here, we use Caffe’s tutorial implementation [22, 23]. This implementation uses a rectified linear unit [24] as its activation function rather than the sigmoid used in the original LeNet.
These methods are also applied to the optimization of hyperparameters of the Batch-Normalized Maxout Network in Network proposed by Chang et al. [25]. Note that this network is deeper and has many more hyperparameters to optimize than LeNet.
Tables 2, 3, 4, 5, 6, and 7 show the details of each network, the fixed hyperparameters, the optimized hyperparameters, and the search ranges. Note that preprocessing and augmentation of the training data are not performed.
5.2 Age and gender classification
Levi and Hassner proposed a CNN for age/gender classification [26]. In these experiments, the hyperparameters of this CNN are optimized by the five methods. This network consists of three convolution layers and two fully connected layers; it receives an image and outputs a gender label or an age group label. We use the implementation available on the project’s web page [27]. We test the DNNs using the Adience DB [28], the age/gender classification benchmark used in the literature [26] (Fig. 8). We divide the dataset into five sets, train the network with four sets, and test it with one set. Note that these processes require significant calculation time; thus, cross validations are not performed during the optimization process. We perform cross validation only for the best solution among the optimal solutions of all methods and calculate the cross-validated accuracy for comparison with the results in the literature [26].
Tables 8, 9, and 10 show the details of each network, the fixed hyperparameters, the optimized hyperparameters, and the search ranges. Note that, for this experiment, data augmentation is conducted in a single-crop manner [26].
5.3 Results
The experiments are executed for one month using 32 modern GPUs. The experimental results are given in Tables 11, 12, 13, and 14. In all experiments, the Nelder-Mead method achieves both minimal loss and minimal variance. The small variance suggests that the initial values of the method do not significantly affect the results. Furthermore, the cross-validated accuracy of the best solution found by the Nelder-Mead method is 87.20% (±1.328024) for gender classification and 51.25% (±5.461970) for age classification. These values are higher than the previous state-of-the-art results (86.8% (±1.4) and 50.7% (±5.1)) reported in the literature [26]. The stability and search performance of this method are remarkable.
The coordinate-search method also achieves good results with LeNet and the Batch-Normalized Maxout Network in Network. However, the coordinate-search method searches points using each vector of the positive basis; thus, its convergence speed is reduced as the number of dimensions increases. This appears to be why the coordinate-search method does not work for the age/gender classification CNN, which has more hyperparameters. Thus, we should use the Nelder-Mead method rather than the coordinate-search method. As demonstrated in the literature [9], CMA-ES is superior to random search because it finds better parameters earlier. Despite using the same hyperparameters for Bayesian optimization, the method works well for age estimation but not for other tasks. This indicates that, for Bayesian optimization, hyperparameters should be adjusted carefully depending on the given task.
The mean loss graphs (Figs. 9, 10, 11, and 12) show that the Nelder-Mead method rapidly finds a good solution and converges faster than the other methods. We anticipate that the objective function for hyperparameter optimization is multimodal, and many local optima that achieve similar results exist. We confirmed this property via additional experiments that optimized the hyperparameters of the gender classification CNN, the network with the largest search space, using the Nelder-Mead method. The optimized hyperparameter settings after 600 evaluations of each experiment are shown using the parallel coordinates plot in Fig. 13. In the figure, points in the search space are represented as polylines with vertices on parallel axes. The position of the vertex on the ith axis corresponds to the value of the hyperparameter x_{i}. The polylines exhibiting small losses are shown in dark colors.
Experimental results showed that the Nelder-Mead method converged to a different point every time, confirming that the objective function is multimodal: different hyperparameter settings achieved similar losses. From Table 13 and Fig. 13, we deduce that many local optima that achieve similar results exist. In such cases, the Nelder-Mead method tends to converge directly to a nearby local optimum without being influenced by the objective function values of distant points. In contrast, the other methods perform a global search; e.g., Bayesian optimization and CMA-ES try to find potential candidates for the global optimum and require more iterations to reach a local optimum in comparison to the Nelder-Mead method.
According to the poor hyperparameter setting detection rates (Tables 15, 16, 17, and 18), on average, approximately 8, 1, 33, and 26% of executions in each experiment are detected as having poor hyperparameter settings and optimization is accelerated in proportion to the detection rate. In particular, the CNN for age/gender classification tends to be very sensitive to hyperparameter settings.
Note that the Nelder-Mead method rarely generates poor hyperparameter settings because of its strategy; e.g., reflection moves the simplex in a direction away from points with poor hyperparameter settings.
From the above results, we conclude that the Nelder-Mead method is the best choice for DNN hyperparameter optimization.
6 Conclusions
In this study, we tested methods for DNN hyperparameter optimization. We showed that the Nelder-Mead method achieved good results in all experiments. Moreover, we achieved state-of-the-art accuracy in age/gender classification on the Adience DB by optimizing the hyperparameters of the CNN proposed in [26].
Complicated hyperparameter optimization methods are difficult to implement and have sensitive hyperparameters of their own that affect their performance; therefore, they are difficult for nonexperts to use. In contrast, the Nelder-Mead method is easy to use and outperforms such complicated methods in many cases.
In our experiments, we optimized the hyperparameters of DNNs for character recognition and age/gender classification. These tasks are important and have been well known for a long time. However, it is also desirable to evaluate the approach on generic object recognition datasets; therefore, in the future, we plan to evaluate it using other datasets. A detailed analysis of the dependency on initial parameters and the optimization of categorical variables will also be the focus of future work.
References
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324.
Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst (NIPS) 25:1097–1105.
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions In: Computer Vision and Pattern Recognition (CVPR), 1–9. http://ieeexplore.ieee.org/document/7298594/.
Bergstra J, Bengio Y (2012) Random search for hyperparameter optimization. J Mach Learn Res 13:281–305.
Mockus J (1974) On Bayesian Methods for Seeking the Extremum In: Optimization Techniques IFIP Technical Conference, 400–404.
Hansen N, Ostermeier A (2001) Completely derandomized selfadaptation in evolution strategies. Evol Comput 9:159–195.
Bergstra J, Bardenet R, Bengio Y, Kégl B (2011) Algorithms for hyperparameter optimization. Adv Neural Inf Process Syst (NIPS) 24:2546–2554.
Snoek J, Larochelle H, Adams RP (2012) Practical Bayesian optimization of machine learning algorithms. Adv Neural Inf Process Syst (NIPS) 25:2951–2959.
Watanabe S, Le Roux J (2014) A black box optimization for automatic speech recognition In: International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3256–3260.
Loshchilov I, Hutter F (2016) CMA-ES for hyperparameter optimization of deep neural networks. https://arxiv.org/abs/1604.07269. Accessed 20 Sept 2017.
Brochu E, Cora VM, De Freitas N (2010) A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. https://arxiv.org/abs/1012.2599. Accessed 20 Sept 2017.
Snoek J, Rippel O, Swersky K, Kiros R, Satish N, Sundaram N, Patwary M, Prabhat M, Adams R (2015) Scalable Bayesian optimization using deep neural networks In: International Conference on Machine Learning (ICML), 2171–2180.
Hansen N, Auger A, Ros R, Finck S, Pošík P (2010) Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009 In: Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, 1689–1696.
Conn AR, Scheinberg K, Vicente LN (2009) Introduction to derivative-free optimization. MPS-SIAM Ser Optim. http://epubs.siam.org/doi/book/10.1137/1.9780898718768.
Nelder JA, Mead R (1965) A simplex method for function minimization. Comput J 7:308–313.
Gilles C, Patrick R, Mélanie H (2005) Model selection for support vector classifiers via direct simplex search. The Florida Artificial Intelligence Research Society (FLAIRS) Conference:431–435.
Domhan T, Springenberg JT, Hutter F (2015) Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves In: Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 3460–3468.
Klein A, Falkner S, Springenberg JT, Hutter F (2016) Bayesian neural networks for predicting learning curves. Workshop on Bayesian Deep Learning, NIPS. http://bayesiandeeplearning.org/2016/papers/BDL_38.pdf.
Klein A, Falkner S, Springenberg JT, Hutter F (2017) Learning curve prediction with Bayesian neural networks. International Conference on Learning Representations (ICLR). http://www.iclr.cc/doku.php?id=iclr2017:conference_posters#tuesday_morning.
De Rainville FM, Fortin FA, Gardner MA, Parizeau M, Gagné C (2012) DEAP: A Python framework for evolutionary algorithms In: EvoSoft Workshop, Companion Proc. of the Genetic and Evolutionary Computation Conference (GECCO). https://dl.acm.org/citation.cfm?id=2330799.
LeCun Y, Cortes C (2010) MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/. Accessed 20 Sept 2017.
Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, Guadarrama S, Darrell T (2014) Caffe: Convolutional architecture for fast feature embedding. https://arxiv.org/abs/1408.5093. Accessed 20 Sept 2017.
Evan S (2014) Training LeNet on MNIST with Caffe. http://caffe.berkeleyvision.org/gathered/examples/mnist.html. Accessed 20 Sept 2017.
Nair V, Hinton GE (2010) Rectified linear units improve restricted Boltzmann machines. International Conference on Machine Learning. https://dl.acm.org/citation.cfm?id=3104425.
Chang JR, Chen YS (2015) Batch-Normalized Maxout Network in Network In: Proceedings of the 33rd International Conference on Machine Learning. https://arxiv.org/abs/1511.02583. Accessed 20 Sept 2017.
Levi G, Hassner T (2015) Age and gender classification using convolutional neural networks. Computer Vision and Pattern Recognition Workshops (CVPRW). http://ieeexplore.ieee.org/document/7301352/.
Levi G, Hassner T (2015) Age and gender classification using convolutional neural networks. http://www.openu.ac.il/home/hassner/projects/cnn_agegender. Accessed 20 Sept 2017.
Eidinger E, Enbar R, Hassner T (2014) Age and gender estimation of unfiltered faces. IEEE Trans Inf Forensic Secur 9(12):2170–2179.
Yangjin J (2013) The learning rate decay policy. https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto. Accessed 20 Sept 2017.
Acknowledgements
This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
Authors’ contributions
YO implemented the hyperparameter optimization methods for DNNs, performed the experiments, and drafted the manuscript. MY implemented the hyperparameter optimization methods for DNNs and helped perform the experiments and draft the manuscript. MO guided the work, supervised the experimental design, and helped draft the manuscript. All authors have read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Ozaki, Y., Yano, M. & Onishi, M. Effective hyperparameter optimization using Nelder-Mead method in deep learning. IPSJ T Comput Vis Appl 9, 20 (2017). https://doi.org/10.1186/s41074-017-0030-7
Keywords
 Hyperparameter optimization
 Nelder-Mead method
 Deep learning
 Convolutional neural network