AUTHOR
Lanza F.
Can agents talk about what they are doing? A proposal with Jason and speech acts
The dream of building robots and artificial agents increasingly capable of thinking and acting like humans is growing by the day. Various models and architectures aim to mimic human behavior. In our current research, we propose a solution for making the actions and thought cycles of agents explainable by introducing inner speech into a multi-agent system. The reasons that led us to use inner speech as a self-modeling engine raised the question of what inner speech is and how it affects cognitive systems. In this proposal, we used speech acts to enable a coalition of agents to exhibit inner speech capabilities in order to explain their behavior, but also to guide and reinforce the creation of an in…
A cognitive architecture for human-robot teaming interaction
Human-robot interaction aimed at cooperation and teamwork is a demanding research task, from both the development and the implementation points of view. In this context, cognitive architectures are a useful means for representing the cognitive perception-action cycle that drives the decision-making process. In this paper, we present ongoing work on a cognitive architecture whose modules can represent the decision-making process starting from the observation of the environment, of the inner world, populated by trust attitudes, emotions, capabilities, and so on, and of the world of the others in the environment.
Human-robot teaming: Perspective on analysis and implementation issues
Interaction in a human-robot team in a changing environment is a major challenge. Several essential aspects underlying efficient interaction deserve investigation. Among them are the ability to produce a self-model and to apply elements from the theory of mind. This case is much more demanding than simply implementing a system in which the various parts have to cooperate and collaborate to achieve a common goal. In a human-robot team, factors intervene that cannot be known before the execution phase. Our goal is to investigate how a human-human team works and to replicate it on the robot by defining a new cognitive architecture that attempts to model all the issues involved. T…
A global workspace theory model for trust estimation in human-robot interaction
Successful and genuine social connections between humans are based on trust, even more so when the people involved have to collaborate to reach a shared goal. With the advent of new findings and technologies in the field of robotics, it appears that this same key factor regulating relationships between humans also applies, with the same importance, to human-robot interaction (HRI). Previous studies have proven the usefulness of a robot able to estimate the trustworthiness of its human collaborators, and in this position paper we discuss a method to extend an existing state-of-the-art trust model with considerations based on social cues such as emotions. The proposed model follows the Global …
Inside the robot’s mind during human-robot interaction
Humans and robots collaborating and cooperating to pursue a shared objective need to rely on each other to carry out an effective decision process and to update their knowledge when necessary in a dynamic environment. Robots have to behave as if they were human teammates. To model the cognitive process of robots during the interaction, we developed a cognitive architecture that we implemented using the BDI (belief, desire, intention) agent paradigm. In this paper, we focus on how to let the robot show the human its reasoning process and how its knowledge of the work environment grows. We realized a framework whose heart is a simulator that serves the human as a window on the robot’s …
Endowing robots with self-modeling abilities for trustful human-robot interactions
Robots involved in collaborative and cooperative tasks with humans cannot be programmed in advance for all their functions. They are autonomous entities acting in a dynamic and often only partially known environment. Both how they interact with humans and their decision process are determined by their knowledge of the environment, of the other, and of themselves. Moreover, the level of trust that each member of the team places in the other is crucial to creating a fruitful collaborative relationship. We hypothesize that one of the main components of a trustful relationship resides in the self-modeling abilities of the robot. The paper illustrates how employing the model of trust by Falcone and Castelfranchi to include self…