
RESEARCH PRODUCT

Path relinking and GRG for artificial neural networks

Leon S. Lasdon, Rafael Martí, Abdellah El-fallahi

subject

Mathematical optimization; Information Systems and Management; Training set; General Computer Science; Artificial neural network; Computer science; Management Science and Operations Research; Solver; Industrial and Manufacturing Engineering; Backpropagation; Evolutionary computation; Tabu search; Nonlinear programming; Search algorithm; Modeling and Simulation; Artificial intelligence; Metaheuristic

description

Artificial neural networks (ANNs) have been widely used for both classification and prediction. This paper focuses on the prediction problem, in which an unknown function is approximated. ANNs can be viewed as models of real systems, built by tuning parameters known as weights. In training the net, the problem is to find the weights that optimize its performance (i.e., that minimize the error over the training set). Although the most popular method for training these networks is backpropagation, other optimization methods such as tabu search or scatter search have been successfully applied to this problem. In this paper we propose a path relinking implementation to solve the neural network training problem. Our method uses GRG, a gradient-based local NLP solver, as an improvement phase, whereas previous approaches used simpler local optimizers. Computational experiments show that the proposed procedure competes with the best-known algorithms in terms of solution quality, at a reasonable computational cost.
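A minimal sketch of the idea described above, not the authors' implementation: weights of a small one-hidden-layer network are relinked by walking from an initiating solution toward a guiding solution, and the best point on the path is refined with a gradient-based local solver. The toy training set, network size, and the use of SciPy's L-BFGS-B as a stand-in for the GRG solver are all assumptions made for illustration.

```python
# Sketch only: path relinking over ANN weight vectors with a gradient-based
# local improvement step. L-BFGS-B stands in for the GRG NLP solver.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy training set: approximate an unknown 1-D function (assumed example).
X = np.linspace(-2, 2, 64).reshape(-1, 1)
y = np.sin(X).ravel()

H = 10                            # hidden neurons (assumed)
n_w = H * (X.shape[1] + 2) + 1    # weight count of a 1-hidden-layer net


def unpack(w):
    d = X.shape[1]
    W1 = w[:H * d].reshape(H, d)
    b1 = w[H * d:H * d + H]
    W2 = w[H * d + H:H * d + 2 * H]
    b2 = w[-1]
    return W1, b1, W2, b2


def mse(w):
    # Training error: mean squared error over the training set.
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(X @ W1.T + b1)
    pred = hidden @ W2 + b2
    return np.mean((pred - y) ** 2)


def improve(w):
    # Improvement phase: local NLP solve from w (stand-in for GRG).
    return minimize(mse, w, method="L-BFGS-B").x


def path_relink(w_init, w_guide, steps=10):
    # Walk from the initiating toward the guiding solution, keeping the
    # best intermediate weight vector, then apply the improvement phase.
    best_w, best_f = w_init, mse(w_init)
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        w = (1 - t) * w_init + t * w_guide
        f = mse(w)
        if f < best_f:
            best_w, best_f = w, f
    return improve(best_w)


# Reference set of locally improved random solutions, then relink pairs.
ref_set = [improve(rng.normal(scale=0.5, size=n_w)) for _ in range(4)]
ref_set.sort(key=mse)
best = ref_set[0]
for guide in ref_set[1:]:
    cand = path_relink(best, guide)
    if mse(cand) < mse(best):
        best = cand

print("training MSE of best net:", mse(best))
```

The relinking path is taken here as convex combinations of the two weight vectors, which is one common choice for continuous solution spaces; the actual move rule and reference-set management in the paper may differ.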

https://doi.org/10.1016/j.ejor.2004.08.012