Search results for "Reinforcement"

Showing 10 of 230 documents

Brain Activity Characterization Induced by Alcoholic Addiction: Spectral and Causality Analysis of Brain Areas Related to Control and Reinforcement o…

2014

Addiction to drugs produces modifications in brain structure and function. In this work, an experimental model using rats is described to characterize the brain activity induced by alcohol addiction. Four recordings were obtained from electrodes located in brain areas related to impulsivity control and reinforcement, i.e. the prelimbic (PL) and infralimbic (IL) cortex, together with the hippocampus (HPC). In the recordings, three main events related to the drinking action were selected: the minute before drinking (T1), the first minute while drinking (T2), and the first minute after drinking stopped (T3).
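
The snippet above does not detail the analysis pipeline; as a rough illustration of the kind of per-event spectral analysis described, the following Python sketch estimates band power of a local field potential signal separately in the T1, T2 and T3 windows. The sampling rate, frequency band and array layout are assumptions made for the example, not values taken from the paper.

# Minimal sketch of per-window spectral analysis of LFP recordings.
# Assumptions (not from the paper): signals sampled at 1 kHz, one channel
# per brain area (PL, IL, HPC), and window boundaries already known.
import numpy as np
from scipy.signal import welch

FS = 1000  # sampling rate in Hz (assumed)

def band_power(segment, fs=FS, band=(4.0, 12.0)):
    """Average power of one LFP segment inside a frequency band (e.g. theta)."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs)  # 1-second Welch windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def window_powers(lfp, windows, band=(4.0, 12.0)):
    """Band power of every area in every named window (start, stop) in samples."""
    return {area: {name: band_power(sig[a:b], band=band)
                   for name, (a, b) in windows.items()}
            for area, sig in lfp.items()}

# Example with synthetic data for the three one-minute events T1, T2, T3
rng = np.random.default_rng(0)
lfp = {"PL": rng.standard_normal(3 * 60 * FS)}
windows = {"T1": (0, 60 * FS), "T2": (60 * FS, 120 * FS), "T3": (120 * FS, 180 * FS)}
print(window_powers(lfp, windows))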

Brain activity and meditation; Addiction; Hippocampus; Local field potential; Impulsivity; Causality; Cortex (anatomy); Reinforcement; Psychology; Neuroscience
researchProduct

Cerebellar learning of bio-mechanical functions of extra-ocular muscles: modeling by artificial neural networks

2003

A control circuit is proposed to model the command of saccadic eye movements. Its wiring is deduced from a mathematical constraint, namely the need, when processing motor orders, to compute an approximate inverse of the bio-mechanical function of the moving plant, here the bio-mechanics of the eye. This wiring is comparable to the anatomy of the cerebellar pathways. A predictive element, necessary for inversion and thus for movement accuracy, is modeled by an artificial neural network whose structure, deduced from physical constraints expressing the mechanics of the eye, is similar to the cell connectivity of the cerebellar cortex. Its functioning is set by supervised reinforceme…
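
As a toy illustration of learning an approximate inverse of a plant function by supervised training, the Python sketch below fits a small one-hidden-layer network so that network(plant output) ≈ motor command. The plant nonlinearity, network size and training settings are invented for the example and are not the cerebellar model described in the paper.

# Toy sketch: supervised learning of an approximate inverse of a plant function.
# The "plant" below (a saturating nonlinearity) is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def plant(command):
    """Invented stand-in for the plant bio-mechanics: command -> displacement."""
    return np.tanh(1.5 * command)

# Training pairs (plant output, original command), so the network learns plant^-1.
commands = rng.uniform(-1.0, 1.0, size=(2000, 1))
outputs = plant(commands)

# One-hidden-layer network trained with plain gradient descent on squared error.
W1 = rng.standard_normal((1, 32)) * 0.5
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05

for _ in range(3000):
    h = np.tanh(outputs @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                        # predicted command
    err = pred - commands
    # Backpropagation of the mean squared error
    dW2 = h.T @ err / len(err); db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = outputs.T @ dh / len(err); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

test = np.array([[0.3], [-0.7]])
recovered = np.tanh(plant(test) @ W1 + b1) @ W2 + b2
print(recovered)  # should be close to the original commands 0.3 and -0.7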

Cerebellum; Eye Movements; Artificial neural network; General Neuroscience; Motor control; Eye movement; Pattern recognition; Saccadic masking; Biomechanical Phenomena; Oculomotor Muscles; Cerebellar cortex; Motor system; Learning; Reinforcement learning; Neural Networks, Computer; Artificial intelligence; Mathematics; Neuroscience
researchProduct

2021

In humans and other mammals, effort-based decision-making paradigms for monetary or food rewards contribute to the study of adaptive goal-directed behaviours acquired through reinforcement learning. Chronic distress, modelled in rodents by repeated exposure to glucocorticoids, induces suboptimal decision-making under uncertainty by impinging on instrumental acquisition and prompting negative-valence behaviours. To further disentangle the motivational tenets of adaptive decision-making, this study addressed the consequences of enduring distress on relevant effort- and reward-processing dimensions. Experimentally, appetitive and consummatory components of motivation were evaluated in adult C…

Cognitive Neuroscience; Novelty; Insular cortex; Behavioral Neuroscience; Distress; Neuropsychology and Physiological Psychology; Corticosterone; Valence (psychology); Reinforcement; Psychology; Neuroscience; FOSB; Basolateral amygdala; Frontiers in Behavioral Neuroscience
researchProduct

Chronic Distress in Male Mice Impairs Motivation Compromising Both Effort and Reward Processing With Altered Anterior Insular Cortex and Basolateral …

2021

In humans and other mammals, effort-based decision-making paradigms for monetary or food rewards contribute to the study of adaptive goal-directed behaviours acquired through reinforcement learning. Chronic distress, modelled in rodents by repeated exposure to glucocorticoids, induces suboptimal decision-making under uncertainty by impinging on instrumental acquisition and prompting negative-valence behaviours. To further disentangle the motivational tenets of adaptive decision-making, this study addressed the consequences of enduring distress on relevant effort- and reward-processing dimensions. Experimentally, appetitive and consummatory components of motivation were evaluated in …

Cognitive Neuroscience; effort; Insular cortex; Behavioral Neuroscience; motivation; Corticosterone; Valence (psychology); Reinforcement; reward processing; Original Research; glucocorticoids; Novelty; Distress; Neuropsychology and Physiological Psychology; chronic distress; Psychology; Neuroscience; basolateral amygdala; FOSB; Frontiers in Behavioral Neuroscience
researchProduct

Emergent Collective Behaviors in a Multi-agent Reinforcement Learning Pedestrian Simulation: A Case Study

2015

In this work, a Multi-agent Reinforcement Learning framework is used to generate simulations of groups of virtual pedestrians. The aim is to study the influence of two different learning approaches on the quality of the generated simulations. The case study consists of simulating two groups of embodied virtual agents crossing each other inside a narrow corridor. This scenario is a classic experiment in the pedestrian-modelling area because a collective behavior, specifically lane formation, emerges with real pedestrians. The paper studies the influence of different learning algorithms, function-approximation approaches, and knowledge-transfer mechanisms on the performance of learned ped…
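
The snippet does not specify the learning setup; the Python sketch below shows, under invented states and rewards, what independent tabular Q-learning for two agents crossing a one-dimensional corridor could look like. The paper's framework is considerably richer (embodied agents, function approximation, transfer of learning), so this is only a minimal illustration of the learning loop.

# Minimal sketch of independent tabular Q-learning for two agents that must
# cross a 1-D corridor in opposite directions. Corridor length, states and
# rewards are invented for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(2)
LENGTH = 8                        # corridor cells 0..LENGTH-1
ACTIONS = [-1, 0, +1]             # step left, wait, step right
Q = [np.zeros((LENGTH, LENGTH, len(ACTIONS))) for _ in range(2)]  # one table per agent
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(pos, goal, other, action):
    """Move one cell; collisions keep the agent in place and are penalized."""
    new = min(max(pos + ACTIONS[action], 0), LENGTH - 1)
    if new == other:              # collision with the other pedestrian
        return pos, -1.0, False
    if new == goal:
        return new, +10.0, True
    return new, -0.1, False       # small cost per step encourages progress

for episode in range(5000):
    pos = [0, LENGTH - 1]         # agent 0 walks right, agent 1 walks left
    goal = [LENGTH - 1, 0]
    done = [False, False]
    for _ in range(50):
        for i in range(2):
            if done[i]:
                continue
            s = (pos[i], pos[1 - i])                     # own cell + other's cell
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[i][s]))
            new, r, done[i] = step(pos[i], goal[i], pos[1 - i], a)
            s2 = (new, pos[1 - i])
            Q[i][s][a] += alpha * (r + gamma * np.max(Q[i][s2]) - Q[i][s][a])
            pos[i] = new
        if all(done):
            break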

Collective behavior; Function approximation; Computer science; Bellman equation; Vector quantization; Probabilistic logic; Reinforcement learning; Artificial intelligence; Transfer of learning; Knowledge transfer; Simulation
researchProduct

GENERALITY OF GRAPHIC VARIABLES ACROSS DRAWING TASKS

1968

Konttinen, R. & Olkinuora, E. Generality of graphic variables across drawing tasks. Scand. J. Psychol., 1968, 9, 161–168.—As a partial replication of the Takala & Rantanen (1964) study, the correlations between 6 graphic variables extracted from 6 drawing tasks differing in complexity were investigated. Four graphic trait factors were obtained: I Size, II Pressure, III Discontinuous lines, and IV Angularity. Other graphic variables (nuancity and reinforcement) loaded factors II and III. The data lend support to the hypothesis that the same graphic traits should be interpreted in the same way irrespective of the complexity of the test. However, the complexity of the drawing task may make a di…

Communication; Generality; Interpretation (logic); General Medicine; Test (assessment); Task (project management); Arts and Humanities (miscellaneous); Drawing Tasks; Developmental and Educational Psychology; Trait; Partial replication; Reinforcement; Psychology; General Psychology; Cognitive psychology; Scandinavian Journal of Psychology
researchProduct

Experimental investigation on the effectiveness of basalt-fibre strengthening systems for confining masonry elements

The use of composite materials for strengthening masonry columns has become a widespread practice over the last few decades. This technique, which generally consists of applying fibre-reinforced polymer (FRP) materials, has shown good potential, being able to provide considerable increases in the strength and ductility of the strengthened element thanks to a passive confinement action. However, the use of polymer-matrix composites has some limitations, mainly related to the performance of the epoxy resins, whose synthetic nature gives rise to compatibility problems with the masonry substrat…

Composite material; Strengthening and repair; Compression test; Fibre Reinforced Cementitious Matrix (FRCM); Fibre Reinforced Polymer (FRP); Experimental investigation; Settore ICAR/09 - Tecnica Delle Costruzioni; Reinforcement ratio; Digital Image Correlation (DIC); Basalt textile grid; Masonry column; BFRP; Tensile test; BFRCM; Basalt fibre; Confinement
researchProduct

Learning formulae from elementary facts

1997

Since the seminal paper by E.M. Gold [Gol67], the computational learning theory community has presumed that the main problem of learning theory at the recursion-theoretic level is to restore a grammar from samples of a language, or a program from samples of its computations. However, scientists in physics and biology have become accustomed to looking for interesting assertions rather than for a universal theory that explains everything.

Computational learning theory; Grammar; Sample exclusion dimension; Algorithmic learning theory; Mathematics education; Learning theory; Reinforcement learning; Sample (statistics); Inductive reasoning; Mathematics
researchProduct

Calibrating a Motion Model Based on Reinforcement Learning for Pedestrian Simulation

2012

In this paper, the calibration of a framework based on Multi-agent Reinforcement Learning (RL) for generating motion simulations of pedestrian groups is presented. The framework sets up a group of autonomous embodied agents that individually learn to control their instantaneous velocity vectors in scenarios with collisions and friction forces. The result of the process is a different learned motion controller for each agent. The calibration of both the physical properties involved in the motion of the embodied agents and the corresponding dynamics is an important issue for a realistic simulation. The physics engine used has been calibrated with values taken from real pedestrian dynamics. Two experime…
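
As a rough, hypothetical illustration of applying an agent's velocity command through a simple physics update with friction, the Python sketch below uses placeholder values for mass, friction, maximum speed and time step; the calibrated parameters and the physics engine used in the paper are not reproduced here.

# Point-mass update for a velocity-controlled agent; all constants are
# placeholders, not the calibrated values from real pedestrian dynamics.
import numpy as np

MASS, FRICTION, V_MAX, DT = 80.0, 0.5, 1.5, 0.05   # placeholder parameters

def apply_velocity_command(position, velocity, commanded_velocity):
    """One physics step: the agent asks for a velocity, friction resists it."""
    commanded = np.clip(commanded_velocity, -V_MAX, V_MAX)
    force = MASS * (commanded - velocity) / DT       # force needed to reach the command
    force -= FRICTION * velocity                     # friction opposes current motion
    velocity = velocity + force / MASS * DT
    position = position + velocity * DT
    return position, velocity

pos, vel = np.zeros(2), np.zeros(2)
for _ in range(10):                                  # walk toward +x for ten steps
    pos, vel = apply_velocity_command(pos, vel, np.array([1.0, 0.0]))
print(pos, vel)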

Computer Science::Multiagent Systems; Computer science; Dynamics (mechanics); Diagram; Calibration; Process (computing); Reinforcement learning; Motion controller; Physics engine; Simulation; Motion (physics)
researchProduct

Multi-agent Reinforcement Learning for Simulating Pedestrian Navigation

2012

In this paper we introduce a Multi-agent system that uses Reinforcement Learning (RL) techniques to learn local navigational behaviors to simulate virtual pedestrian groups. The aim of the paper is to study empirically the validity of RL for learning agent-based navigation controllers and their transfer capabilities when they are used in simulation environments with a higher number of agents than in the learned scenario. Two RL algorithms that use Vector Quantization (VQ) as the generalization method for the state space are presented. Both strategies focus on obtaining a good vector quantizer that adequately generalizes the state space of the agents. We empirically state the convergence…
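
The abstract does not give the quantizer details; the Python sketch below illustrates the general idea of using Vector Quantization to generalize a continuous state space for tabular RL: a k-means codebook maps each continuous state to the index of its nearest prototype, and that index addresses a Q-table. The codebook size, state dimension and the choice of plain k-means are assumptions made for the example, not the algorithms used in the paper.

# Sketch of Vector Quantization as a state-space generalizer for tabular RL.
import numpy as np

rng = np.random.default_rng(3)

def build_codebook(states, k=64, iterations=20):
    """Plain k-means: returns k prototype vectors covering the visited states."""
    codebook = states[rng.choice(len(states), size=k, replace=False)]
    for _ in range(iterations):
        assign = np.argmin(((states[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = states[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(state, codebook):
    """Index of the nearest prototype; used as the discrete state for the Q-table."""
    return int(np.argmin(((codebook - state) ** 2).sum(-1)))

states = rng.uniform(-1, 1, size=(5000, 4))        # e.g. relative positions/velocities
codebook = build_codebook(states, k=64)
Q = np.zeros((64, 5))                              # 64 discrete states x 5 actions
print(quantize(states[0], codebook), Q.shape)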

Computer science; Generalization; Vector quantization; Context (language use); Machine learning; Domain (software engineering); Convergence (routing); State space; Reinforcement learning; Artificial intelligence; Transfer of learning
researchProduct