The Developing iCub

The videos on this page show highlights from our work modelling infant development on the iCub robot. Videos are shown in reverse chronological order, with the most recent work first.

The IEEE ICDL-EpiRob 2015 “Babybot challenge”

As part of the Babybot challenge at the IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob) 2015, the iCub was used to reproduce a longitudinal study of early reaching in infants between the ages of 1 and 19 weeks. The challenge was based on a study presented by Claus von Hofsten in 1984, a major finding of which was that the number of successful reaches made by infants declined at week 7 and increased again shortly after, forming a U-shaped activity curve.

By utilising the MoDeL architecture and system, and by carefully designing experiments around the findings and constraints reported in the infant development literature, the iCub robot was able to experience an artificial staged development similar to that of the infants in the original study.

The study was awarded second place in the challenge: the iCub achieved similar results in terms of successful reaches, and allowed conclusions to be drawn about the importance of mapping early sensory-motor experience and of repetitive interactions during infancy.

Video showing the progress iCub makes during the longitudinal study:

Time-lapse learning of hand-eye-torso coordination

This video shows a sequence of stages of development occurring in one sitting, and demonstrates the speed at which real-time learning can be performed.  The robot is initially placed under a series of constraints that prevent joint movement, and these are released as learning progresses. The constraints generate a sequence of learning (eye-head-arm-torso) that reflects stages in human infant development. Initial arm control is learnt in simulation, and is quickly refined on the real robot during this session.

Learning on the robot takes just over half an hour, and the reaching is bootstrapped with approximately 30 minutes of real-time learning in simulation.  The result is a robot that can learn to gaze and reach to objects, even those initially just out of reach, from scratch in around one hour. The video ends with a demonstration of the robot using the learnt actions to reach to an object at several locations. The footage in this second section is played at 4 times normal speed.
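
The constraint-release idea lends itself to a short sketch. The Python below is a minimal illustration, not the actual learning system: each stage frees one joint group, babbling runs until a simple competence score passes a threshold, and then the next constraint is lifted. All names here (Stage, babble_step, the competence measure) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str             # e.g. "eyes", "head", "arm", "torso"
    joints: list          # joint group freed when this stage begins
    threshold: float      # competence required before the next release

def competence(errors, window=20):
    """Crude competence score: shrinking recent error gives a score near 1."""
    recent = errors[-window:]
    return 1.0 / (1.0 + sum(recent) / len(recent))

def develop(stages, babble_step, max_steps=10000):
    """Release constraints one stage at a time, e.g. eye-head-arm-torso."""
    free_joints = []
    for stage in stages:
        free_joints += stage.joints        # lift this stage's constraint
        errors = []
        for _ in range(max_steps):
            errors.append(babble_step(free_joints))  # one learning trial
            if len(errors) >= 20 and competence(errors) >= stage.threshold:
                break                      # stage mastered, release the next
```

The eye-head-arm-torso ordering seen in the video corresponds to the order of the stages passed in.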

Simulated reach learning

This video shows part of the iCub's hand-eye coordination being learnt in simulation.  The algorithm demonstrated here is being used to investigate how infants might learn to control the movements of their arms and hands. The video shows the later stages of arm control learning, in which vision and motor babbling are used to discover how joint movement affects the movement of the hand in space.
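
As a rough illustration of how such babbling can ground a visuomotor map (a sketch under simplifying assumptions, not the project's actual algorithm): issue small random joint displacements, observe how the hand moves in the image, and fit a linear model relating the two. Here move_joints and observe_hand are hypothetical stand-ins for the robot interface.

```python
import numpy as np

def babble(move_joints, observe_hand, n_samples=200, n_joints=4):
    """Fit an approximate image Jacobian J such that dx ~= J @ dq."""
    dq = np.random.uniform(-0.05, 0.05, size=(n_samples, n_joints))
    dx = np.empty((n_samples, 2))          # hand displacement in the image
    for i in range(n_samples):
        before = observe_hand()            # 2-D hand position from vision
        move_joints(dq[i])                 # small random joint movement
        dx[i] = observe_hand() - before
    W, *_ = np.linalg.lstsq(dq, dx, rcond=None)   # solve dx ~= dq @ W
    return W.T                             # maps a joint move to a hand move
```

Pseudo-inverting the fitted map then suggests a joint movement that drives the hand towards a visual target.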

Reaching is learnt in a series of stages similar to those observed in infants. Initial learning is performed in simulation because it relies on tactile feedback from collisions with objects, which would be too risky to attempt on the real robot. The learnt movements are then transferred to the robot, as shown in the video above.

Learning through play

This video shows the Aberystwyth iCub learning using play-like behaviour.  The robot is intrinsically motivated to explore objects and actions based on their novelty.  It experiments with behaviours it has previously learnt, trying them on different objects, and in different combinations and sequences, and stores the results in memory-like structures called 'schemas'. These schemas record how various actions can change the state of the world, and they can be used to plan a sequence of actions to reach a desired goal. The iCub can make generalisations about what it has learnt, and so reapply behaviours in new situations, but it can also learn exceptions when things do not work as expected.
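
A schema can be pictured as a precondition-action-result record. The sketch below is an illustrative guess at such a structure, not the system's actual code, paired with a toy depth-limited search that chains schemas into a plan; the facts and action names are made up for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Schema:
    preconditions: frozenset   # facts that must hold, e.g. {"object_seen"}
    action: str                # e.g. "reach", "grasp", "press"
    result: frozenset          # facts observed to hold afterwards

def plan(schemas, state, goal, depth=5):
    """Depth-limited forward search for an action sequence reaching goal."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for s in schemas:
        # apply only schemas whose preconditions hold and that add a new fact
        if s.preconditions <= state and not s.result <= state:
            rest = plan(schemas, state | s.result, goal, depth - 1)
            if rest is not None:
                return [s.action] + rest
    return None

schemas = [
    Schema(frozenset({"object_seen"}), "reach", frozenset({"hand_at_object"})),
    Schema(frozenset({"hand_at_object"}), "grasp", frozenset({"object_held"})),
]
print(plan(schemas, frozenset({"object_seen"}), frozenset({"object_held"})))
# -> ['reach', 'grasp']
```

Exceptions could be modelled by adding negative facts to a schema's preconditions when a prediction fails, narrowing the contexts in which it is applied.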

Development of visually triggered reaching and grasping

This video shows the Aberystwyth iCub learning to perform visually triggered reaching and grasping by progressing through a series of developmental stages.  Initially the iCub can only move its eyes, whilst the rest of its motor capabilities are constrained.  As the iCub masters each stage of development, a constraint is released, allowing it to learn new actions. This sequence of development simplifies the learning task at each stage, and reflects the cephalocaudal direction of development observed in infancy.

iCub learning hand-eye coordination

This video shows the iCub robot learning hand-eye coordination, using our developmental algorithms, in just one hour. Initially the robot has no control over any of its motors, and gradually learns to control its eyes, neck, and arms, following the developmental sequence seen in infants. Once it has sufficient hand-eye coordination, we place three buttons in front of the robot.  Using a reward system, the robot learns that by pressing the buttons (rather than the surrounding space) it can trigger a set of lights.
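
The button task can be read as a small reward-driven selection problem. The sketch below reflects that reading under simplifying assumptions rather than the actual experiment code: candidate reach targets are tried, presses that turn the lights on earn reward, and incremental value estimates let the buttons win out over the surrounding space. Here press_and_observe is a hypothetical stand-in for the robot's act-and-sense loop.

```python
import random

def learn_buttons(targets, press_and_observe, trials=100, epsilon=0.2):
    """Epsilon-greedy value learning over candidate reach targets.
    press_and_observe(t) should return 1.0 if the lights came on, else 0.0."""
    value = {t: 0.0 for t in targets}
    count = {t: 0 for t in targets}
    for _ in range(trials):
        if random.random() < epsilon:
            t = random.choice(targets)          # explore a random target
        else:
            t = max(targets, key=value.get)     # exploit the best estimate
        r = press_and_observe(t)
        count[t] += 1
        value[t] += (r - value[t]) / count[t]   # incremental mean reward
    return value
```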