
RESEARCH PRODUCT

Reinforcement Learning Your Way: Agent Characterization through Policy Regularization

Charl Maree; Christian Omlin

subject

FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Artificial Intelligence (cs.AI); explainable AI; multi-agent systems; deterministic policy gradients; General Earth and Planetary Sciences; General Environmental Science; VDP::Technology: 500::Information and communication technology: 550

description

The increased complexity of state-of-the-art reinforcement learning (RL) algorithms has resulted in an opacity that inhibits explainability and understanding. This has led to the development of several post hoc explainability methods that aim to extract information from learned policies. These methods rely on empirical observations of the policy and therefore characterize agents’ behaviour only after the fact. In this study, we instead developed a method to imbue agents’ policies with a characteristic behaviour through regularization of their objective functions. Our method guides the agents’ behaviour during learning, resulting in an intrinsic characterization that connects the learning process with model explanation. We provide a formal argument and empirical evidence for the viability of our method. In future work, we intend to employ it to develop agents that optimize individual financial customers’ investment portfolios based on their spending personalities.
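The abstract describes regularizing the agents’ objective functions so that a characteristic behaviour emerges during learning. The sketch below is one illustrative way such a penalty could be attached to a deterministic-policy-gradient actor loss; it is not the authors’ implementation, and names such as `Actor`, `prior_action`, and `reg_weight` are hypothetical assumptions.

```python
# Minimal sketch: a DPG-style actor loss augmented with a behavioural
# regularizer that pulls the policy toward a characteristic action profile.
# Illustrative only; not the method published in the paper.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy: maps states to continuous actions."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def regularized_actor_loss(actor, critic, states, prior_action, reg_weight=0.1):
    """Standard DPG actor loss (-Q) plus a penalty toward a 'characteristic'
    prior action profile (hypothetical form of the regularizer)."""
    actions = actor(states)
    q_values = critic(states, actions)                   # critic estimate of Q(s, a)
    dpg_loss = -q_values.mean()                          # maximize expected return
    reg_loss = ((actions - prior_action) ** 2).mean()    # characteristic-behaviour penalty
    return dpg_loss + reg_weight * reg_loss


if __name__ == "__main__":
    # Toy usage: a stand-in critic; a real setup would train the critic as well.
    state_dim, action_dim = 8, 2
    actor = Actor(state_dim, action_dim)
    critic = lambda s, a: -(a ** 2).sum(dim=-1, keepdim=True)   # toy Q-function
    states = torch.randn(32, state_dim)
    prior_action = torch.tensor([0.5, -0.5])             # desired behavioural profile
    loss = regularized_actor_loss(actor, critic, states, prior_action)
    loss.backward()
    print(float(loss))
```

In this reading, `reg_weight` trades off return maximization against adherence to the characteristic behaviour; the actual form of the regularizer and how the characteristic profile is defined are specified in the paper itself.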

doi:10.3390/ai3020015
http://arxiv.org/abs/2201.10003