Bibliographic Metadata
- Title: A Deep Reinforcement Learning Based Model Supporting Object Familiarization
- Author
- Published
- Language: English
- Document type: Conference Proceedings
- URN
- DOI
Restriction-Information
- The document is publicly available on the WWW
Abstract
An important ability of cognitive systems is to familiarize themselves with the properties of objects and their environment, and to develop an understanding of the consequences of their own actions on physical objects. Devising developmental approaches that allow cognitive systems to familiarize themselves with objects in this sense, via guided self-exploration, is an important challenge within the field of developmental robotics.
In this paper we present a novel approach that allows cognitive systems to familiarize themselves with the properties of objects and the effects of their actions on them in a self-exploration fashion. Our approach is inspired by developmental studies that hypothesize that infants have a propensity to systematically explore the connection between their own actions and the perceptual consequences in order to support inter-modal calibration of their bodies.
We propose a reinforcement-learning-based approach operating in a continuous state space, in which the function predicting cumulative future rewards is learned via a deep Q-network. We investigate the impact of the reward structure, of different regularization approaches, and of different exploration strategies.
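The learning scheme described above follows the standard deep Q-learning setup: a function approximator predicts cumulative future rewards for each action, an exploration strategy (such as epsilon-greedy) chooses actions, and updates move the prediction toward the Bellman target. The following is a minimal sketch of those two ingredients using NumPy with a hypothetical linear Q-function; the paper's actual network architecture, state space, and reward structure are not specified here, so all dimensions and parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_ACTIONS = 4, 3  # illustrative assumptions, not from the paper
GAMMA = 0.99                 # discount factor for cumulative future rewards
EPSILON = 0.1                # epsilon-greedy exploration rate

# Hypothetical linear Q-function standing in for the deep Q-network:
# Q(s, a) = W[a] . s
W = rng.normal(size=(N_ACTIONS, STATE_DIM))

def q_values(state):
    """Predicted cumulative future reward for each action in `state`."""
    return W @ state

def select_action(state):
    """Epsilon-greedy exploration: a random action with probability EPSILON,
    otherwise the action with the highest predicted value."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def td_target(reward, next_state, done):
    """Bellman target y = r + gamma * max_a' Q(s', a')."""
    if done:
        return float(reward)
    return float(reward) + GAMMA * float(np.max(q_values(next_state)))

state = rng.normal(size=STATE_DIM)
action = select_action(state)
next_state = rng.normal(size=STATE_DIM)
y = td_target(reward=1.0, next_state=next_state, done=False)
# A training step would then adjust W so that Q(state, action) moves toward y.
```

In the full deep Q-network setting, the linear map `W` is replaced by a neural network trained by gradient descent on the squared error between `Q(state, action)` and the target `y`.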
