

Multilayer neural networks: an experimental evaluation of on-line training methods

Rafael Martí, Abdellah El-fallahi

subject

Training set; General Computer Science; Artificial neural network; Computer science; Computer Science :: Neural and Evolutionary Computation; Mathematics of Computing :: Numerical Analysis; Management Science and Operations Research; Machine learning; Backpropagation; Tabu search; Modeling and Simulation; Conjugate gradient method; Genetic algorithm; Simulated annealing; Artificial intelligence; Gradient descent; Metaheuristic

description

Artificial neural networks (ANN) are inspired by the structure of biological neural networks and their ability to integrate knowledge and learning. In ANN training, the objective is to minimize the error over the training set. The most popular method for training these networks is backpropagation, a gradient descent technique. Other non-linear optimization methods, such as the conjugate direction set or conjugate gradient methods, have also been used for this purpose. More recently, metaheuristics such as simulated annealing, genetic algorithms, and tabu search have also been adapted to this context.

There are situations in which the necessary training data are generated in real time and extensive training is not possible. This "on-line" training arises, for example, in the context of optimizing a simulation. This paper presents extensive computational experiments comparing 12 "on-line" training methods on a collection of 45 functions from the literature within a short-term horizon. We also propose a new method, based on the tabu search methodology, that competes in quality with the best previous approaches.
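To make the on-line setting concrete, the sketch below (not the paper's implementation; the network size, learning rate, and target function are illustrative assumptions) trains a one-hidden-layer network by plain backpropagation with stochastic gradient descent, updating the weights after each sample as it arrives from a simulated data stream over a short horizon.

```python
import numpy as np

# Hedged sketch of on-line backpropagation: one hidden layer of sigmoid
# units, a linear output, and a weight update after every single sample.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineMLP:
    def __init__(self, n_inputs, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.5, size=n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        a = sigmoid(self.W1 @ x + self.b1)   # hidden activations
        return self.w2 @ a + self.b2, a      # linear output

    def update(self, x, y):
        """One on-line step: forward pass, squared error, gradient update."""
        y_hat, a = self.predict(x)
        err = y_hat - y                      # dE/dy_hat for E = 0.5*(y_hat - y)^2
        # Output-layer gradients
        grad_w2 = err * a
        grad_b2 = err
        # Backpropagate through the sigmoid hidden layer
        delta1 = err * self.w2 * a * (1.0 - a)
        grad_W1 = np.outer(delta1, x)
        grad_b1 = delta1
        # Gradient-descent update based on this single sample
        self.w2 -= self.lr * grad_w2
        self.b2 -= self.lr * grad_b2
        self.W1 -= self.lr * grad_W1
        self.b1 -= self.lr * grad_b1
        return 0.5 * err ** 2                # error on this sample

# Usage: approximate f(x) = sin(x1) + x2^2 from a short stream of samples,
# as when the data are produced on the fly by a simulation being optimized.
net = OnlineMLP(n_inputs=2, n_hidden=8)
rng = np.random.default_rng(1)
for _ in range(500):                         # deliberately short horizon
    x = rng.uniform(-1, 1, size=2)
    y = np.sin(x[0]) + x[1] ** 2
    net.update(x, y)
```

The metaheuristic alternatives compared in the paper (simulated annealing, genetic algorithms, tabu search) replace the gradient step above with a direct search over the weight vector, which is useful when gradients are noisy or the error surface has many poor local minima.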

https://doi.org/10.1016/s0305-0548(03)00104-7