■ Title: Identifying the key actions that lead an agent to accomplish a task in model-based deep reinforcement learning
■ Speaker: Dr. Luiz Felipe Vecchietti (Mechanical Technology Research Institute, KAIST)
■ Date: Thursday, October 14, 16:00
■ URL: https://kaist.zoom.us/j/88529018177?pwd=YkdsV1FKRGl5Z1g3Q1Z6amxNTTlWZz09
■ Abstract: Recent advances in Artificial Intelligence (AI), especially in the area of deep reinforcement learning (RL), have produced breakthrough results in robotics. In a particular type of RL, known as multi-goal RL, the agent learns to achieve multiple different goals (objectives) with a goal-conditioned policy, which is trained to generalize its behavior effectively across goals. For complex goals, the agent finishes a task only after completing a long sequence of actions; for example, it can take thousands of actions for an agent to solve a Rubik’s cube. Because of this long delay before feedback arrives, the agent cannot recognize which actions were the key ones leading to success. In this talk, we will present a recent paradigm shift from model-free to model-based techniques in deep RL, in which the agent not only trains a policy to maximize future rewards but also tries to infer and internalize information about the environment, such as predicting future rewards, learning the environment dynamics, and understanding its own learning capabilities. We present our results proposing a reward-prediction model to detect the most important actions contributing to the accomplishment of a task, and discuss recent results obtained in different environments.
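The core idea of using a reward-prediction model for credit assignment can be illustrated with a toy sketch. This is a hypothetical illustration, not the speaker's actual method: it assumes we already have per-timestep reward predictions for a trajectory and simply flags the actions that produce the largest jumps in predicted reward as candidate "key actions" (the function name `key_action_indices` and the toy numbers are invented for this example).

```python
# Hypothetical sketch (not the speaker's method): given per-step outputs of a
# learned reward-prediction model, rank actions by how much they increase the
# predicted reward, and return the indices of the top candidates.

def key_action_indices(predicted_rewards, top_k=1):
    """Return indices of the actions with the largest predicted-reward jumps."""
    # Jump caused by the action taken at step t: prediction after vs. before.
    jumps = [predicted_rewards[t + 1] - predicted_rewards[t]
             for t in range(len(predicted_rewards) - 1)]
    order = sorted(range(len(jumps)), key=lambda t: jumps[t], reverse=True)
    return order[:top_k]

# Toy trajectory: predictions stay nearly flat until the action at step 3
# unlocks most of the progress toward the goal.
preds = [0.0, 0.01, 0.02, 0.02, 0.9, 0.95]
print(key_action_indices(preds))  # → [3]
```

In a real multi-goal RL setting the predictions would come from a model trained on (state, action, goal) inputs; the ranking step above only shows how such predictions could localize credit within a long action sequence despite delayed feedback.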