The current hype around Artificial Intelligence (AI) shows no sign of pausing, but some reality check is due. Most of the time we are talking about sophisticated computer software, nothing more. Intelligence, at least in its broader meaning and understanding, is something current computing machines cannot really accomplish. This invites a turn to the philosophical question of the nature of computation and its relationship with that broader meaning of intelligence. We also know that there is not just one intelligence or cognitive ability, but an identifiable mix of different abilities, which we manage, somewhat forcefully, to integrate into a whole some call general intelligence. This raises the question: are computations processes similar to the cognitive processes we readily recognize in an aware, sentient, intelligent agent?
The field of Robotics is one of the most interesting developments within AI, and every advance in robotics might get us closer to that elusive goal of computing machines able to perform tasks as if they were agents endowed with general intelligence. Recently I watched the video below, which The Information Age posted today. It is a presentation about a paper by three authors, one of whom is the presenter, Jean-Baptiste Mouret. The paper, whose abstract, brief comments, and concluding remarks you can find in this post after the video of the talk, is about robots and their adaptive skills. The authors claim that complex computing machines (complex robots) can develop adaptive capabilities on their own, allowing them to overcome the limitations of brute-force computing, with its costly data requirements when navigating complex tasks or environments; all done with a clever representation algorithm able to inform the robot about the high-performance behaviors it will need to anticipate in order to successfully complete a task. Human intelligence is still vastly superior in this cost-to-performance trade-off while immersed in a complex environment.
The authors’ argument rests on the claim that animals (humans or otherwise) normally develop intuitions about prior encounters or events within a complex environment, particularly those involving damage to their physical integrity, and use those intuitions to guide improved behavior when interacting with the same environment again. The interplay of animal cognition under stressful conditions might help build more robust and autonomous robots, and in the process further our understanding of what the relationship between computations and intelligent behavior really is, and of their more intricate processes.
This research effort was motivated by the trend of complex robotic machines migrating from closed, controlled environments, such as a manufacturing facility, to the wider, open, uncontrolled space outside. In this shift it is immediately obvious that the machines might need novel algorithmic setups, where trial-and-error approaches gain relevance compared with optimization and control algorithms. Indeed, a trial-and-error algorithmic setup seems much more compatible with animal cognition than an optimization one, and it is also the underpinning of the current understanding of evolutionary approaches to learning and adaptation in animal (and human) cognition.
Talk: Robots that can adapt like animals
The paper: Robots that can adapt like animals
As robots leave the controlled environments of factories to autonomously function in more complex, natural environments (1,2,3), they will have to respond to the inevitable fact that they will become damaged (4,5). However, while animals can quickly adapt to a wide variety of injuries, current robots cannot “think outside the box” to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes (6), and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots (4,5). Here we introduce an intelligent trial and error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: This map represents the robot’s intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
Here, we show that rapid adaptation can be achieved by guiding an intelligent trial-and-error learning algorithm with an automatically generated, pre-computed, behavior-performance map that predicts the performance of thousands of different behaviors (Supplementary Video S1). The key insight is that, whereas current learning algorithms either start with no knowledge of the search space (17) or with minimal knowledge from a few human demonstrations (17,18), animals better understand the space of possible behaviors and their value from previous experience (19), enabling injured animals to intelligently select tests that validate or invalidate whole families of promising compensatory behaviors.
One key aspect of this algorithmic approach is that it preserves high-performing behaviors while reducing a high-dimensional search space to a low-dimensional one, something quite intuitive for our cognition (and that of animals), but quite remarkable for non-living computing machines to achieve.
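To make this concrete, here is a minimal, self-contained Python sketch of how such a behavior-performance map could be built with a MAP-Elites-style loop. Everything here (the toy simulator, the 2-D descriptor, the grid resolution, all names) is a hypothetical stand-in for illustration, not the authors' actual implementation:

```python
import random

# Toy stand-ins for the robot and its simulator (hypothetical, for illustration):
# a "controller" is a high-dimensional parameter vector; simulating it yields a
# low-dimensional behavior descriptor plus a performance score.
DIM = 36   # dimensionality of the controller (search) space
GRID = 10  # resolution of each behavior-descriptor dimension

def simulate(controller):
    """Hypothetical simulator: returns (behavior descriptor, performance)."""
    descriptor = (sum(controller[:18]) / 18, sum(controller[18:]) / 18)
    performance = 1.0 - abs(0.5 - descriptor[0]) - abs(0.5 - descriptor[1])
    return descriptor, performance

def to_cell(descriptor):
    """Discretize the 2-D descriptor into a cell of the map."""
    return tuple(min(GRID - 1, int(d * GRID)) for d in descriptor)

def build_map(iterations=20000, seed=1):
    """MAP-Elites-style loop: keep, per behavior cell, the best controller."""
    random.seed(seed)
    archive = {}  # cell -> (performance, controller)
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # variation: mutate a randomly chosen elite already in the map
            _, parent = random.choice(list(archive.values()))
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in parent]
        else:
            child = [random.random() for _ in range(DIM)]
        desc, perf = simulate(child)
        cell = to_cell(desc)
        if cell not in archive or perf > archive[cell][0]:
            archive[cell] = (perf, child)  # new elite for this behavior cell
    return archive

behavior_performance_map = build_map()
print(len(behavior_performance_map), "of", GRID * GRID, "behavior cells filled")
```

The design point this illustrates is the dimensionality reduction itself: the archive is indexed by the low-dimensional behavior descriptor, while each stored controller remains high-dimensional, so the map compresses the search space without discarding high-performing solutions.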
The paper is divided into two sections. The first is an overall presentation of the proposal, with the conceptual framework and the assumptions made. The second details the methods and the experimental setup used to check the validity of those assumptions, and presents the final results. Each section ends with its own concluding remarks, which I briefly sketch now (please also check the figure with the pseudo-code of the proposed algorithm):
First section remark:
An additional parallel is that Intelligent Trial and Error primes the robot for creativity during a motionless period, after which the generated ideas are tested. This process is reminiscent of the finding that some animals start the day with new ideas that they may quickly disregard after experimenting with them (27), and more generally, that sleep improves creativity on cognitive tasks (28). A final parallel is that the simulator and Gaussian process components of Intelligent Trial and Error are two forms of predictive models, which are known to exist in animals (29,12). All told, we have shown that combining pieces of nature’s algorithm, even if differently assembled, moves robots more towards animals by endowing them with the ability to rapidly adapt to unforeseen circumstances.
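The Gaussian process component mentioned in that remark can be sketched in very simplified form: the pre-computed map serves as the model's prior mean, each physical trial on the damaged robot yields an observed deviation from that prior, and a kernel spreads the correction to similar behaviors. The following Python sketch is my own simplification (the kernel length scale, the one-test-per-behavior selection rule, and the toy damage model are all assumptions, not the paper's code):

```python
import math

def kernel(c1, c2, length=3.0):
    """Squared-exponential kernel over map cells: similar behaviors co-vary."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return math.exp(-d2 / (2 * length ** 2))

def adapt(prior_map, trial, stop=0.65, max_trials=20):
    """Trial-and-error loop. prior_map: cell -> performance predicted by the
    simulator before damage; trial(cell): measured performance after damage."""
    deviations = {}  # tested cell -> (observed minus prior) performance
    best, observed, t = None, 0.0, 0
    for t in range(1, max_trials + 1):
        def predicted(cell):
            # GP-like posterior mean: the prior corrected by kernel-weighted
            # deviations observed on the damaged robot so far
            corr = sum(kernel(cell, c) * d for c, d in deviations.items())
            return prior_map[cell] + corr
        # test the most promising behavior not yet tried on the real robot
        best = max((c for c in prior_map if c not in deviations), key=predicted)
        observed = trial(best)
        if observed >= stop:
            return best, observed, t  # good-enough compensatory behavior found
        deviations[best] = observed - prior_map[best]
    return best, observed, t

# Toy example: 20 one-dimensional behavior cells; damage ruins the upper half
prior = {(i,): 0.4 + 0.03 * i for i in range(20)}

def damaged(cell):
    return prior[cell] * (0.3 if cell[0] >= 10 else 1.0)

best, perf, trials = adapt(prior, damaged)
print(best, round(perf, 2), trials)
```

The point of the sketch is the interplay the authors describe: a few bad trials in the damaged region push the predictions of all nearby behaviors down, so the search quickly abandons whole families of compensatory candidates instead of testing them one by one.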
Second section remarks:
(…) These experiments show that the selection of the behavioral dimensions is not critical to get good results. Indeed, all tested behavioral descriptors, even those randomly generated, perform well (median > 0.20 m/s after 17 trials). On the other hand, if the robot’s designers have some prior knowledge about which dimensions of variation are likely to reveal different types of behaviors, the algorithm can benefit from this knowledge to further improve results (as with the duty factor descriptor). (…)
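As a concrete illustration of a hand-chosen behavioral descriptor like the duty factor mentioned in that excerpt, here is a toy Python example of my own (not the paper's code): the duty factor of a leg is simply the fraction of time it spends in contact with the ground.

```python
def duty_factor(contact_log):
    """contact_log: list of per-timestep tuples of booleans, one per leg.
    Returns, per leg, the fraction of timesteps with ground contact."""
    steps = len(contact_log)
    legs = len(contact_log[0])
    return tuple(sum(t[leg] for t in contact_log) / steps for leg in range(legs))

# toy log: 4 timesteps, 2 legs
log = [(True, False), (True, True), (False, True), (True, True)]
print(duty_factor(log))  # (0.75, 0.75)
```

A vector of such per-leg fractions is low-dimensional and gait-revealing, which is exactly the kind of prior knowledge about dimensions of variation the authors say can further improve results.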
featured image: Extended Data Figure 1 | An overview of the Intelligent Trial and Error Algorithm.