Input Addition and Deletion in Reinforcement: Towards Protean Learning
Article - April 2022

Iago Bonnici, Abdelkader Gouaich, Fabien Michel

Iago Bonnici, Abdelkader Gouaich, Fabien Michel, "Input Addition and Deletion in Reinforcement: Towards Protean Learning", Autonomous Agents and Multi-Agent Systems, April 2022, #4. ISSN 1387-2532

Abstract

Reinforcement Learning (RL) agents are commonly thought of as adaptive decision procedures. They work on input/output data streams called "states", "actions" and "rewards". Most current research on RL adaptiveness to change works under the assumption that the stream signatures (i.e. the arity and types of inputs and outputs) remain the same throughout the agent's lifetime. As a consequence, natural situations where the signatures vary (e.g. when new data streams become available, or when others become obsolete) are not studied. In this paper, we relax this assumption and consider that signature changes define a new learning situation called Protean Learning (PL). When they occur, traditional RL agents become undefined, so they need to restart learning. Can better methods be developed under the PL view? To investigate this, we first construct a stream-oriented formalism to properly define PL and signature changes. Then, we run experiments in an idealized PL situation where input addition and deletion occur during the learning process. Results show that a simple PL-oriented method enables graceful adaptation to these arity changes and is more efficient than restarting the process.
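To make the notion of a signature change more concrete, the following is a minimal, hypothetical Python sketch; it does not reproduce the paper's formalism or its PL-oriented method. It illustrates one naive way to keep a policy's input arity constant when named input streams appear or disappear: a fixed-capacity adapter maps each named input onto a reserved slot and exposes a presence mask. The names ObservationAdapter, max_inputs and the example streams are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's method): a fixed-capacity observation
# adapter that keeps the downstream policy's input size constant when named
# input streams are added or deleted during learning.

class ObservationAdapter:
    def __init__(self, max_inputs: int):
        self.max_inputs = max_inputs
        self.slots = {}  # persistent mapping: input name -> reserved slot index

    def adapt(self, observation: dict) -> np.ndarray:
        """Map a variable-signature observation {name: value} to a fixed vector."""
        values = np.zeros(self.max_inputs)
        mask = np.zeros(self.max_inputs)
        for name, value in observation.items():
            if name not in self.slots:
                if len(self.slots) >= self.max_inputs:
                    continue  # capacity exhausted: ignore the extra input
                self.slots[name] = len(self.slots)  # reserve a slot for this new input
            slot = self.slots[name]
            values[slot] = value
            mask[slot] = 1.0  # flag this input as currently available
        return np.concatenate([values, mask])

adapter = ObservationAdapter(max_inputs=4)
print(adapter.adapt({"speed": 1.2, "angle": 0.3}))                # initial signature
print(adapter.adapt({"speed": 1.2, "angle": 0.3, "lidar": 5.0}))  # input added
print(adapter.adapt({"angle": 0.3, "lidar": 5.0}))                # input deleted
```

Under this sketch, a restart can be avoided because each stream keeps its reserved slot across changes: slots for deleted streams are simply masked out, and newly added streams occupy previously unused capacity, rather than redefining the agent's input space.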

See the full record on HAL
