Calibrating a Motion Model Based on Reinforcement Learning for Pedestrian Simulation
Miguel Lozano, Francisco Martinez-Gil, Fernando Fernández

subject: Computer Science::Multiagent Systems; Computer science; Dynamics (mechanics); Diagram; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Calibration; Process (computing); Reinforcement learning; Motion controller; Physics engine; Simulation; Motion (physics)

description:
This paper presents the calibration of a framework based on multi-agent reinforcement learning (RL) for generating motion simulations of pedestrian groups. The framework sets up a group of autonomous embodied agents, each of which learns to control its own instantaneous velocity vector in scenarios with collisions and friction forces. The result of the process is a different learned motion controller for each agent. Calibrating both the physical properties involved in the motion of our embodied agents and the corresponding dynamics is an important issue for a realistic simulation. The physics engine used has been calibrated with values taken from real pedestrian dynamics. Two experiments have been carried out to test this approach, and their results are compared with databases of real pedestrians in similar scenarios. As a comparison tool, the diagram of speed versus density, known in the literature as the fundamental diagram, is used.
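The comparison tool named in the abstract is the fundamental diagram, i.e. mean walking speed plotted against crowd density. As a minimal sketch of how such a diagram can be estimated from simulated or recorded trajectories, the code below measures per-frame density inside a rectangular region and averages the corresponding speeds per density bin. The function name `fundamental_diagram`, the array layouts, and the measurement `region` are illustrative assumptions, not the measurement procedure used in the paper, which is not detailed in this record.

```python
import numpy as np

def fundamental_diagram(positions, velocities, region, n_bins=8):
    """Estimate a speed-versus-density (fundamental) diagram from trajectories.

    positions  : (T, N, 2) array of agent positions per frame, in metres
    velocities : (T, N, 2) array of agent velocities per frame, in m/s
    region     : (xmin, xmax, ymin, ymax) measurement rectangle, in metres
    Returns (density_bin_centres, mean_speed_per_bin).
    """
    xmin, xmax, ymin, ymax = region
    area = (xmax - xmin) * (ymax - ymin)

    frame_density, frame_speed = [], []
    for pos, vel in zip(positions, velocities):
        # Agents currently inside the measurement rectangle.
        inside = ((pos[:, 0] >= xmin) & (pos[:, 0] <= xmax) &
                  (pos[:, 1] >= ymin) & (pos[:, 1] <= ymax))
        if not inside.any():
            continue
        frame_density.append(inside.sum() / area)                       # pedestrians / m^2
        frame_speed.append(np.linalg.norm(vel[inside], axis=1).mean())  # mean speed, m/s

    frame_density = np.asarray(frame_density)
    frame_speed = np.asarray(frame_speed)

    # Average the per-frame speeds inside equally spaced density bins.
    edges = np.linspace(frame_density.min(), frame_density.max(), n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    bin_idx = np.digitize(frame_density, edges[1:-1])   # bin index 0 .. n_bins-1
    mean_speed = np.array([
        frame_speed[bin_idx == b].mean() if np.any(bin_idx == b) else np.nan
        for b in range(n_bins)
    ])
    return centres, mean_speed
```

Plotting `centres` against `mean_speed` for the learned agents and for a real-pedestrian database would support the kind of comparison described above.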
year | journal | country | edition | language
---|---|---|---|---
2012-01-01 | | | |