Scalability of using Restricted Boltzmann Machines for Combinatorial Optimization
Franz Rothlauf, Jörn Grahl, Malte Probst
Subjects: FOS: Computer and information sciences; Mathematical optimization; Optimization problem; Boltzmann machine; Evolutionary computation; Neural and Evolutionary Computing (cs.NE); Artificial neural network; Estimation of distribution algorithm; Scalability; Combinatorial optimization; Modeling and Simulation; I.2.6; I.2.8
Abstract: Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Restricted Boltzmann Machines (RBMs) are generative neural networks with these desired properties. We integrate an RBM into an EDA and evaluate the performance of this system in solving combinatorial optimization problems with a single objective. We assess how the number of fitness evaluations and the CPU time scale with problem size and complexity. The results are compared to the Bayesian Optimization Algorithm (BOA), a state-of-the-art multivariate EDA, and the Dependency Tree Algorithm (DTA), which uses a simpler probability model that requires less computational effort to train. Although RBM–EDA needs larger population sizes and more fitness evaluations than BOA, it outperforms BOA in terms of CPU time, in particular when the problem is large or complex, because RBM–EDA spends less time on model building. DTA, with its restricted model, is a good choice for small problems but fails on larger and more difficult ones. These results highlight the potential of generative neural networks for combinatorial optimization.
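The abstract describes the general EDA loop with an RBM as the probability model: evaluate a population, select the fitter individuals, train an RBM on them, and sample the next population from the model. The following is a minimal illustrative sketch of that loop, not the authors' implementation; all names, the CD-1 training, the hyperparameters, and the OneMax test function are assumptions chosen for brevity.

```python
# Illustrative RBM-EDA sketch (assumed details, not the paper's code).
# Demonstrated on OneMax: maximize the number of ones in a bitstring.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def train(self, data, epochs=30):
        n = len(data)
        for _ in range(epochs):
            # Positive phase: hidden probabilities given the data.
            ph = sigmoid(data @ self.W + self.c)
            h = (rng.random(ph.shape) < ph).astype(float)
            # Negative phase: one Gibbs step back to the visible layer.
            pv = sigmoid(h @ self.W.T + self.b)
            v = (rng.random(pv.shape) < pv).astype(float)
            ph2 = sigmoid(v @ self.W + self.c)
            # CD-1 parameter updates.
            self.W += self.lr * (data.T @ ph - v.T @ ph2) / n
            self.b += self.lr * (data - v).mean(axis=0)
            self.c += self.lr * (ph - ph2).mean(axis=0)

    def sample(self, n_samples, gibbs_steps=25):
        """Draw new candidate solutions via Gibbs sampling from the model."""
        v = (rng.random((n_samples, len(self.b))) < 0.5).astype(float)
        for _ in range(gibbs_steps):
            h = (rng.random((n_samples, len(self.c)))
                 < sigmoid(v @ self.W + self.c)).astype(float)
            v = (rng.random(v.shape)
                 < sigmoid(h @ self.W.T + self.b)).astype(float)
        return v

def rbm_eda(fitness, n_bits=20, pop_size=200, generations=15, truncation=0.5):
    """EDA loop: evaluate, truncation-select, fit RBM, resample population."""
    pop = (rng.random((pop_size, n_bits)) < 0.5).astype(float)
    best_x, best_f = None, -np.inf
    for _ in range(generations):
        fit = np.array([fitness(x) for x in pop])
        if fit.max() > best_f:                       # remember best-so-far
            best_f, best_x = fit.max(), pop[fit.argmax()].copy()
        elite = pop[np.argsort(fit)[::-1][: int(truncation * pop_size)]]
        model = RBM(n_bits, n_bits // 2)             # fresh model each generation
        model.train(elite)
        pop = model.sample(pop_size)
    return best_x, best_f

best, best_fit = rbm_eda(lambda x: x.sum())
print(best_fit)
```

The key trade-off the paper measures is visible in the sketch: RBM training and sampling cost grows mildly with problem size, whereas BOA's Bayesian-network model building is more expensive, which is why RBM-EDA wins on CPU time despite needing more fitness evaluations.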
year |
---|
2014-11-27 |