Search results for "I.2.6"

Showing 7 of 7 documents

Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop

2018

Active inference is an ambitious theory that treats perception, inference and action selection of autonomous agents under the heading of a single principle. It suggests biologically plausible explanations for many cognitive phenomena, including consciousness. In active inference, action selection is driven by an objective function that evaluates possible future actions with respect to current, inferred beliefs about the world. Active inference at its core is independent from extrinsic rewards, resulting in a high level of robustness across e.g. different environments or agent morphologies. In the literature, paradigms that share this independence have been summarised under the notion of in…

FOS: Computer and information sciences; Computer science; Computer Science - Artificial Intelligence; predictive information; Biomedical Engineering; Inference; Systems and Control (eess.SY); 02 engineering and technology; Action selection; I.2.0; I.2.6; I.5.0; I.5.1; lcsh:RC321-571; 03 medical and health sciences; 0302 clinical medicine; active inference; Artificial Intelligence; FOS: Electrical engineering, electronic engineering, information engineering; 0202 electrical engineering, electronic engineering, information engineering; Formal concept analysis; Methods; perception-action loop; universal reinforcement learning; intrinsic motivation; lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry; Free energy principle; Cognitive science; Robotics and AI; I.5.0; I.5.1; I.2.6; Partially observable Markov decision process; I.2.0; Artificial Intelligence (cs.AI); Action (philosophy); empowerment; Independence (mathematical logic); free energy principle; Computer Science - Systems and Control; 020201 artificial intelligence & image processing; Biological plausibility; 62F15; 91B06; 030217 neurology & neurosurgery; variational inference
researchProduct

Probabilistic and team PFIN-type learning: General properties

2008

We consider the probability hierarchy for Popperian FINite learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e. there exists an algorithm that receives p_1 and p_2 and answers whether PFIN-type learning with the probability of success p_1 is equivalent to PFIN-type learning with the probability of success p_2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending ordering and order-equivalent to ordinal epsilon_0. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning…

FOS: Computer and information sciences; Computer Science::Machine Learning; Theoretical computer science; Computer Networks and Communications; Existential quantification; Structure (category theory); Decidability; Type (model theory); Learning in the limit; Theoretical Computer Science; Machine Learning (cs.LG); Probability of success; Finite limits; Mathematics; Ordinals; Discrete mathematics; Hierarchy; business.industry; Applied Mathematics; Algorithmic learning theory; Probabilistic logic; F.1.1; I.2.6; Inductive inference; Inductive reasoning; Decidability; Computer Science - Learning; Team learning; Computational Theory and Mathematics; Artificial intelligence; business; Journal of Computer and System Sciences
researchProduct

Denoising Autoencoders for Fast Combinatorial Black Box Optimization

2015

Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AE) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and of RBM-EDA, another EDA based on a generative neural network that has proven competitive with BOA. For the considered pro…
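The learn/sample/select cycle this abstract describes can be sketched with a deliberately simple probability model standing in for the DAE. This is a hypothetical toy (a univariate marginal model on a OneMax objective), not the paper's implementation:

```python
import random

def onemax(x):
    """Toy fitness: number of ones in the bitstring."""
    return sum(x)

def eda(n_bits=20, pop_size=50, n_select=25, generations=30, seed=0):
    """Minimal EDA loop: sample from model, select, re-learn model.
    A DAE/RBM would replace the univariate model `p` used here."""
    rng = random.Random(seed)
    # Start from the uniform model: p[i] = P(bit i == 1).
    p = [0.5] * n_bits
    best = None
    for _ in range(generations):
        # Sample a population from the current probability model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        # Truncation selection: keep the fittest individuals.
        pop.sort(key=onemax, reverse=True)
        selected = pop[:n_select]
        if best is None or onemax(selected[0]) > onemax(best):
            best = selected[0]
        # Re-estimate the model from the selected individuals.
        p = [sum(ind[i] for ind in selected) / n_select
             for i in range(n_bits)]
    return best

best = eda()
print(onemax(best))
```

The design point the abstracts share is that only the model-learning and model-sampling steps change between DAE-EDA, RBM-EDA, and BOA; the surrounding sample/select loop is the same.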

FOS: Computer and information sciences; Artificial neural network; I.2.6; business.industry; Fitness approximation; Computer science; Noise reduction; I.2.8; MathematicsofComputing_NUMERICALANALYSIS; Computer Science - Neural and Evolutionary Computing; Machine learning; computer.software_genre; Autoencoder; Orders of magnitude (bit rate); Estimation of distribution algorithm; Black box; ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION; Neural and Evolutionary Computing (cs.NE); Artificial intelligence; business; I.2.6; I.2.8; computer; Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation
researchProduct

Synthesis of polyaromatic and polydentate ligands and their use in the preparation of porous metal-organic frameworks

2018

Synthesis of polyaromatic and polydentate ligands and their use in the preparation of porous metal-organic frameworks. Petkus J.; scientific supervisor Dr. chem. Šubins K.; consultant Dr. habil. chem. Zicmanis A. Bachelor's thesis, 44 pages, 43 figures, 3 tables, 36 literature references. In Latvian. The bachelor's thesis is devoted to improving the syntheses for obtaining new heterotriangulene derivatives.

Polyaromatic ligands; 2,6,10-triiodo-substituted heterotriangulene; Sonogashira cross-coupling; triisopropylsilyl group deprotection; Chemistry
researchProduct

Scalability of using Restricted Boltzmann Machines for Combinatorial Optimization

2014

Abstract Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Restricted Boltzmann Machines (RBMs) are generative neural networks with these desired properties. We integrate an RBM into an EDA and evaluate the performance of this system in solving combinatorial optimization problems with a single objective. We assess how the number of fitness evaluations and the CPU time scale with problem size and complexity. The results are compared to the Bayesian Optimization Algorithm (BOA), a state-of-the-art multivariate EDA, and the Dependency Tree Algorithm (DTA), which uses a simpler probability model requiring less computati…
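The abstract's methodology is to measure how fitness-evaluation counts and CPU time scale with problem size. A minimal sketch of that instrumentation, with plain random search as a hypothetical stand-in for RBM-EDA/BOA/DTA:

```python
import random
import time

class CountingFitness:
    """Wraps a fitness function and counts how often it is called."""
    def __init__(self, fn):
        self.fn = fn
        self.calls = 0

    def __call__(self, x):
        self.calls += 1
        return self.fn(x)

def random_search(fitness, n_bits, budget, rng):
    """Placeholder optimizer: evaluates `budget` random bitstrings."""
    best, best_f = None, float("-inf")
    for _ in range(budget):
        x = [rng.randint(0, 1) for _ in range(n_bits)]
        f = fitness(x)
        if f > best_f:
            best, best_f = x, f
    return best, best_f

# Record evaluations and CPU time for growing problem sizes.
for n_bits in (16, 32, 64):
    fit = CountingFitness(sum)  # OneMax stand-in objective
    rng = random.Random(1)
    t0 = time.process_time()
    _, best_f = random_search(fit, n_bits, budget=2000, rng=rng)
    cpu = time.process_time() - t0
    print(n_bits, fit.calls, round(cpu, 4), best_f)
```

Counting evaluations separately from CPU time matters because, as the abstract notes, algorithms with richer probability models may need fewer evaluations yet spend more time per model-learning step.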

FOS: Computer and information sciences; Mathematical optimization; Information Systems and Management; Optimization problem; General Computer Science; Population; Computer Science::Neural and Evolutionary Computation; 0211 other engineering and technologies; Boltzmann machine; 02 engineering and technology; Management Science and Operations Research; Industrial and Manufacturing Engineering; Evolutionary computation; 0202 electrical engineering, electronic engineering, information engineering; Neural and Evolutionary Computing (cs.NE); education; Mathematics; education.field_of_study; 021103 operations research; Artificial neural network; I.2.6; I.2.8; Computer Science - Neural and Evolutionary Computing; Estimation of distribution algorithm; Modeling and Simulation; Scalability; Combinatorial optimization; 020201 artificial intelligence & image processing; I.2.6; I.2.8; Algorithm
researchProduct

CCDC 104280: Experimental Crystal Structure Determination

1997

Related Article: J. Ratilainen, K. Airola, M. Nieger, M. Bohme, J. Huuskonen, K. Rissanen, Chem. Eur. J., 1997, 3, 749. doi:10.1002/chem.19970030515

Space Group; Crystallography; Crystal System; Crystal Structure; Cell Parameters; 4,10-Di(1,4)benzena-3,5,9,11-tetraoxa-1,7-di(2,6)pyridinadodecaphane hemikis(diethyl ether) clathrate; Experimental 3D Coordinates
researchProduct

CCDC 104281: Experimental Crystal Structure Determination

1997

Related Article: J. Ratilainen, K. Airola, M. Nieger, M. Bohme, J. Huuskonen, K. Rissanen, Chem. Eur. J., 1997, 3, 749. doi:10.1002/chem.19970030515

4,10,16-Tri(1,4)benzena-3,5,9,11,15,17-hexaoxa-1,7,13-tri(2,6)pyridinaoctadecaphane monohydrate clathrate; Space Group; Crystallography; Crystal System; Crystal Structure; Cell Parameters; Experimental 3D Coordinates
researchProduct