
subject

Computer science; Robotics; Artificial intelligence; Cognitive architecture; Human–robot interaction; Human–computer interaction; Perception; Self-awareness; Robot; Affordance; Meaning (existential); Computer Science Applications

description

Despite major progress in Robotics and AI, robots are still essentially "zombies", repeatedly performing actions and tasks without understanding what they are doing. Deep-learning AI programs classify tremendous amounts of data without grasping the meaning of their inputs or outputs. We still lack a genuine theory of the underlying principles and methods that would enable robots to understand their environment, to be cognizant of what they do, to take appropriate and timely initiatives, to learn from their own experience, and to show that they know that they have learned and how. The rationale of this paper is that an agent's understanding of its environment (including the agent itself and its effects on that environment) requires self-awareness, which in turn emerges from this understanding and from the distinction the agent is able to make between its own mind-body and its environment. The paper is organized around five issues: agent perception and interaction with the environment; learning actions; agent interaction with other agents, specifically humans; decision-making; and the cognitive architecture integrating these capacities.
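
The sketch below is not the paper's architecture; it is a minimal, illustrative Python toy (all class and function names are assumptions) showing how a perception-decision-learning loop could be combined with a crude self/environment distinction: the agent predicts the effects of its own actions and attributes unpredicted changes to the environment, echoing the mind-body versus environment distinction discussed above.

```python
# Illustrative sketch only: a toy agent loop integrating decision-making,
# learning from experience, and a simple self/environment attribution.
import random
from dataclasses import dataclass, field


@dataclass
class SelfModel:
    """Tracks which observed changes the agent predicted from its own actions."""
    predicted: dict = field(default_factory=dict)

    def expect(self, action, effect):
        # Record the effect the agent expects its own action to produce.
        self.predicted[action] = effect

    def attribute(self, action, observed_effect):
        # Effects matching the agent's own prediction are attributed to itself;
        # everything else is attributed to the environment.
        return "self" if self.predicted.get(action) == observed_effect else "environment"


class Agent:
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}  # learned action values
        self.self_model = SelfModel()

    def decide(self, epsilon=0.2):
        # Simple epsilon-greedy decision-making over learned action values.
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, lr=0.1):
        # Incremental value update from the agent's own experience.
        self.values[action] += lr * (reward - self.values[action])


def environment_step(action):
    # Toy environment: "push" reliably moves the block; "wave" does nothing,
    # but the block occasionally moves on its own (an external cause).
    moved_by_agent = action == "push"
    moved_externally = random.random() < 0.1
    effect = "block_moved" if (moved_by_agent or moved_externally) else "no_change"
    reward = 1.0 if moved_by_agent else 0.0
    return effect, reward


if __name__ == "__main__":
    agent = Agent(["push", "wave"])
    for step in range(50):
        action = agent.decide()
        agent.self_model.expect(action, "block_moved" if action == "push" else "no_change")
        effect, reward = environment_step(action)
        cause = agent.self_model.attribute(action, effect)
        agent.learn(action, reward)
    print("learned action values:", agent.values)
```

In this toy, the agent gradually prefers the action whose effect it can both predict and attribute to itself, which is one narrow way to operationalize "knowing that it has learned"; a genuine cognitive architecture of the kind the paper calls for would of course need far richer perception, interaction, and self-modeling.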