From Low Level Motor Control to High Level Interaction Skills

Authors

  • David Bailly
  • Pierre Andry
  • Philippe Gaussier

DOI:

https://doi.org/10.2390/biecoll-robotdoc2012-02

DDC: 004 (Data processing, computer science, computer systems)

Abstract

The goal of this research is to create a non-verbal system able to interact safely and naturally with humans. Our main hypothesis is that mechanisms of high-level interaction, such as cooperation and understanding intentions, can be obtained from well-designed low-level systems. For example, an effector instrumented to detect forces applied by others makes it easy to read their direction (opposing vs. facilitating) and, at a higher level of interpretation, the others' intention concerning the device's movement. This is one of the reasons we chose hydraulic technology, which offers a potential for physical compliance. Moreover, pressure control in the pistons is closer to muscle control than electric motors are.

For the control architecture, we are interested in modeling the layers of motor command: low-level force control, then multimodal inputs (especially vision) leading to prediction and anticipation capabilities. To this end, this research includes the design of a bio-inspired neural network able to provide force control of the hardware and to merge inputs from different kinds of sensors, including vision and proprioception. The control has to be as close as possible to the hardware, with as few layers as possible. It is based on control by activation of agonist and antagonist muscles. Position and torque sensors, as well as short-range proximity sensors, are used to learn simple movements and their sensory outcomes.

Vision is provided by a robotic eye mounted on a fast pan-tilt system allowing movements at human speed. A high-definition camera delivers a video stream that can be used to analyze the scene. The neural network we designed allows the system to analyze the scene using points of interest. By extracting local features around those points, it is possible to build a library of visual features. Using this library, objects can be recognized by learning simple associations between these local features and the sensory context, including supervision signals. Actions can then be associated with the context or with the presence of an object. Moreover, sequences of simple actions can be learned through cognitive maps. For example, the robot can learn from a human teacher to grasp, move and release an object. From there, combined with object recognition, the robot is able to learn tasks such as sorting objects by their visual characteristics.

As we build this controller, we hope to improve our knowledge of brain structures such as the motor cortex, the prefrontal cortex, the striatum and the cerebellum. Models of all these structures, among others, are used in the architecture developed here. This research aims in particular to better understand the influence of each structure on the global behavior of the robot, as well as the synergies that emerge from the cooperation between structures, and to create a new type of humanoid robot in which everything, from the technology through low-level control to high-level control, is designed with realistic human interaction in mind.
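As a minimal illustration of the force-reading idea in the abstract (not the authors' implementation; all names and thresholds are hypothetical), classifying an externally applied force as opposing or facilitating can be as simple as taking the sign of its alignment with the commanded movement:

    # Minimal sketch: read the direction of an external force relative to
    # the effector's commanded movement. The threshold is illustrative.
    import numpy as np

    def read_intention(commanded_velocity, external_force, threshold=0.1):
        """Classify an applied force as 'facilitating', 'opposing' or
        'neutral' with respect to the commanded movement direction."""
        alignment = np.dot(commanded_velocity, external_force)
        if alignment > threshold:
            return "facilitating"
        if alignment < -threshold:
            return "opposing"
        return "neutral"

    # Example: the effector moves along +x while a partner pushes back.
    print(read_intention(np.array([1.0, 0.0]), np.array([-0.5, 0.0])))
    # -> 'opposing'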
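The abstract's control by activation of agonist and antagonist muscles can be sketched, under the common assumption that net drive follows the difference of the two activations while their sum sets co-contraction stiffness; the gains and the single-joint setting are illustrative, not taken from the paper:

    # Minimal sketch of an antagonistic pair driving one rotational joint.
    def muscle_torque(a_agonist, a_antagonist, angle, velocity,
                      gain=5.0, stiffness_gain=2.0, damping=0.1):
        """Torque from an agonist/antagonist pair, activations in [0, 1]."""
        net = gain * (a_agonist - a_antagonist)              # differential drive
        stiffness = stiffness_gain * (a_agonist + a_antagonist)  # co-contraction
        return net - stiffness * angle - damping * velocity

    # Co-activating both muscles raises stiffness (a compliance knob)
    # without producing any net drive:
    print(muscle_torque(0.6, 0.6, angle=0.1, velocity=0.0))

This kind of formulation is one reason antagonistic actuation maps naturally onto compliant, muscle-like control: stiffness and movement are set by two independent combinations of the same activations.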
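The feature library and association-based recognition described in the abstract can be sketched as incremental recruitment of local descriptors plus Hebbian-like feature-to-label votes; the class names, the cosine-similarity matching and the recruitment threshold are assumptions for illustration:

    # Minimal sketch: a library of local descriptors around points of
    # interest, and object recognition by learned feature-label associations.
    import numpy as np

    class FeatureLibrary:
        def __init__(self, match_threshold=0.9):
            self.features = []            # stored (normalized) descriptors
            self.threshold = match_threshold

        def encode(self, descriptor):
            """Index of the closest stored feature; recruit a new one if
            nothing matches well enough."""
            d = descriptor / (np.linalg.norm(descriptor) + 1e-9)
            for i, f in enumerate(self.features):
                if np.dot(d, f) > self.threshold:
                    return i
            self.features.append(d)
            return len(self.features) - 1

    class ObjectRecognizer:
        def __init__(self, library, n_objects):
            self.library = library
            self.assoc = {}               # feature index -> vote vector
            self.n_objects = n_objects

        def learn(self, descriptors, label):
            """Supervision signal strengthens feature-label associations."""
            for d in descriptors:
                i = self.library.encode(d)
                votes = self.assoc.setdefault(i, np.zeros(self.n_objects))
                votes[label] += 1.0       # simple Hebbian-like reinforcement

        def recognize(self, descriptors):
            votes = np.zeros(self.n_objects)
            for d in descriptors:
                votes += self.assoc.get(self.library.encode(d), 0.0)
            return int(np.argmax(votes))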
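Finally, the grasp-move-release example can be read as a chain in a cognitive-map-like graph: nodes are sensory contexts, edges are the actions that led between them, and a task is a path through the graph. The state names and the breadth-first planner below are illustrative assumptions, not the paper's model:

    # Minimal sketch: learn (context, action) -> next-context transitions
    # from demonstration, then recover an action sequence by graph search.
    from collections import deque

    transitions = {}   # (context, action) -> next context

    def learn_transition(context, action, next_context):
        transitions[(context, action)] = next_context

    def plan(start, goal):
        """Breadth-first search over learned transitions."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            ctx, path = queue.popleft()
            if ctx == goal:
                return path
            for (c, a), nxt in transitions.items():
                if c == ctx and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [a]))
        return None

    # A teacher demonstrates grasp -> move -> release:
    learn_transition("object_seen", "grasp", "object_held")
    learn_transition("object_held", "move", "object_at_target")
    learn_transition("object_at_target", "release", "task_done")
    print(plan("object_seen", "task_done"))   # ['grasp', 'move', 'release']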

Published

2012-12-31