Search results for "I.2.6"
Showing 7 of 7 documents
Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop
2018
Active inference is an ambitious theory that treats perception, inference and action selection of autonomous agents under the heading of a single principle. It suggests biologically plausible explanations for many cognitive phenomena, including consciousness. In active inference, action selection is driven by an objective function that evaluates possible future actions with respect to current, inferred beliefs about the world. Active inference at its core is independent from extrinsic rewards, resulting in a high level of robustness across, e.g., different environments or agent morphologies. In the literature, paradigms that share this independence have been summarised under the notion of in…
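Concretely, the "objective function over future actions" the snippet refers to is usually the expected free energy. A minimal one-step sketch for a discrete setting, assuming a known likelihood matrix A = p(o|s), per-action transition matrices B_a = p(s'|s,a), and log-preferences over outcomes; all names here are illustrative, not taken from the paper:

import numpy as np

def expected_free_energy(belief, A, B_a, log_pref_o):
    """Score one action by its expected free energy: risk (divergence of
    predicted outcomes from preferred outcomes) plus ambiguity (expected
    entropy of the likelihood). Lower is better."""
    q_s = B_a @ belief                    # predicted state belief after the action
    q_o = A @ q_s                         # predicted outcome distribution
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - log_pref_o))
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)  # entropy of p(o|s), per state
    ambiguity = H_A @ q_s
    return risk + ambiguity

def select_action(belief, A, Bs, log_pref_o):
    """Pick the action minimising one-step expected free energy."""
    G = [expected_free_energy(belief, A, B_a, log_pref_o) for B_a in Bs]
    return int(np.argmin(G))

Note that no extrinsic reward appears anywhere: preferences over outcomes and the ambiguity term do all the work, which is what makes the scheme reward-independent in the sense the abstract describes.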
Probabilistic and team PFIN-type learning: General properties
2008
We consider the probability hierarchy for Popperian FINite learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e., there exists an algorithm that receives p_1 and p_2 and answers whether PFIN-type learning with success probability p_1 is equivalent to PFIN-type learning with success probability p_2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in the descending ordering and order-equivalent to the ordinal epsilon_0. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning…
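Restated in assumed notation (writing PFIN⟨p⟩ for the family of classes PFIN-learnable with success probability p; the symbols are ours, not the paper's), the two headline claims read:

% Decidability: equivalence of success probabilities is computable.
\text{There is an algorithm } D \text{ with } D(p_1, p_2) = 1 \iff \mathrm{PFIN}\langle p_1 \rangle = \mathrm{PFIN}\langle p_2 \rangle .
% Structure: the distinct learning powers, listed by descending probability,
% form a well-order of type epsilon_0, the limit of the tower of omegas:
\bigl( \{\mathrm{PFIN}\langle p \rangle : p \in (0,1]\},\ \ge \bigr) \;\cong\; \varepsilon_0,
\qquad \varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\} .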
Denoising Autoencoders for Fast Combinatorial Black Box Optimization
2015
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AE) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and of RBM-EDA, another EDA based on a generative neural network, which has proven competitive with BOA. For the considered pro…
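As a rough illustration of how a DAE slots into an EDA, a minimal sketch follows; TinyDAE, its hyperparameters, and the corrupt-then-reconstruct sampling step are our assumptions, not the paper's implementation:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyDAE:
    """One-hidden-layer denoising autoencoder with tied weights,
    trained by plain gradient descent on the cross-entropy loss."""
    def __init__(self, n_visible, n_hidden=32, lr=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.bh = np.zeros(n_hidden)
        self.bv = np.zeros(n_visible)
        self.lr = lr

    def _forward(self, x):
        h = sigmoid(x @ self.W + self.bh)           # encode
        return h, sigmoid(h @ self.W.T + self.bv)   # decode (tied weights)

    def reconstruct(self, x):
        return self._forward(x)[1]

    def fit(self, X, epochs=100, noise=0.1):
        for _ in range(epochs):
            mask = self.rng.random(X.shape) < noise
            Xn = np.where(mask, 1.0 - X, X)         # bit-flip corruption
            h, probs = self._forward(Xn)
            err = probs - X                         # d(cross-entropy)/d(logits)
            dh = (err @ self.W) * h * (1.0 - h)
            self.W -= self.lr * (Xn.T @ dh + err.T @ h) / len(X)
            self.bv -= self.lr * err.mean(axis=0)
            self.bh -= self.lr * dh.mean(axis=0)

def dae_eda(fitness, n_bits, pop_size=100, n_gens=50, seed=0):
    """EDA loop: select elites, fit the DAE to them, then sample the next
    population by corrupting elites and reconstructing through the DAE."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits)).astype(float)
    for _ in range(n_gens):
        scores = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(scores)[pop_size // 2:]]  # keep the better half
        dae = TinyDAE(n_bits)
        dae.fit(elite)
        parents = elite[rng.integers(0, len(elite), size=pop_size)]
        flip = rng.random(parents.shape) < 0.1
        noisy = np.where(flip, 1.0 - parents, parents)
        probs = dae.reconstruct(noisy)               # per-bit Bernoulli params
        pop = (rng.random(probs.shape) < probs).astype(float)
    return pop[np.argmax([fitness(x) for x in pop])]

# e.g. OneMax: dae_eda(lambda x: x.sum(), n_bits=32) should approach all-ones.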
Synthesis of polyaromatic and polydentate ligands and their use in the preparation of porous metal-organic frameworks
2018
Synthesis of polyaromatic and polydentate ligands and their use in the preparation of porous metal-organic frameworks. Petkus J.; scientific supervisor Dr. chem. Šubins K.; consultant Dr. habil. chem. Zicmanis A. Bachelor's thesis, 44 pages, 43 figures, 3 tables, 36 literature references. In Latvian. The bachelor's thesis is devoted to improving the syntheses for preparing new heterotriangulene derivatives.
Scalability of using Restricted Boltzmann Machines for Combinatorial Optimization
2014
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Restricted Boltzmann Machines (RBMs) are generative neural networks with these desired properties. We integrate an RBM into an EDA and evaluate the performance of this system in solving combinatorial optimization problems with a single objective. We assess how the number of fitness evaluations and the CPU time scale with problem size and complexity. The results are compared to the Bayesian Optimization Algorithm (BOA), a state-of-the-art multivariate EDA, and the Dependency Tree Algorithm (DTA), which uses a simpler probability model requiring less computati…
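For comparison with the DAE sketch above, the RBM variant replaces corrupt-and-reconstruct with block Gibbs sampling from a model trained by one-step contrastive divergence (CD-1, the standard RBM training rule). The class below is an illustrative stand-in under those assumptions, not the paper's code, and would drop into the same EDA loop in place of TinyDAE:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden=32, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.bh = np.zeros(n_hidden)
        self.bv = np.zeros(n_visible)
        self.lr = lr

    def _p_h(self, v):
        return sigmoid(v @ self.W + self.bh)

    def _p_v(self, h):
        return sigmoid(h @ self.W.T + self.bv)

    def _bernoulli(self, p):
        return (self.rng.random(p.shape) < p).astype(float)

    def fit(self, X, epochs=100):
        for _ in range(epochs):                     # CD-1 updates
            ph0 = self._p_h(X)
            v1 = self._bernoulli(self._p_v(self._bernoulli(ph0)))
            ph1 = self._p_h(v1)
            self.W += self.lr * (X.T @ ph0 - v1.T @ ph1) / len(X)
            self.bv += self.lr * (X - v1).mean(axis=0)
            self.bh += self.lr * (ph0 - ph1).mean(axis=0)

    def sample(self, v, n_steps=10):
        """Block Gibbs chain, started from existing solutions."""
        for _ in range(n_steps):
            h = self._bernoulli(self._p_h(v))
            v = self._bernoulli(self._p_v(h))
        return v

# In the loop above, replace the DAE steps with:
#   rbm = TinyRBM(n_bits); rbm.fit(elite); pop = rbm.sample(parents)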
CCDC 104280: Experimental Crystal Structure Determination
1997
Related Article: J. Ratilainen, K. Airola, M. Nieger, M. Bohme, J. Huuskonen, K. Rissanen, Chem. Eur. J. 1997, 3, 749. doi:10.1002/chem.19970030515
CCDC 104281: Experimental Crystal Structure Determination
1997
Related Article: J. Ratilainen, K. Airola, M. Nieger, M. Bohme, J. Huuskonen, K. Rissanen, Chem. Eur. J. 1997, 3, 749. doi:10.1002/chem.19970030515