Mantisbot (6) models the neural circuits of the praying mantis for target tracking.


It is an engineer’s dream to build a robot as competent as an insect at locomotion, directed action, navigation, and survival in complex conditions. But alongside the study of insects to improve robotics, robot implementations have played a useful role in evaluating mechanistic explanations of insect behavior, testing hypotheses by embedding them in real-world machines. The wealth and depth of data coming from insect neuroscience hold out the tantalizing possibility of building complete insect brain models. Robotics has a role to play in maintaining a focus on functional understanding—what do the neural circuits need to compute to support successful behavior?

Insect brains have been described as “minute structures controlling complex behaviors” (1): Compare the number of neurons in the fruit fly brain (∼135,000) to that in the mouse (70 million) or human (86 billion). Insect brain structures and circuits evolved independently to solve many of the same problems faced by vertebrate brains (or a robot’s control program). Despite the vast range of insect body types, behaviors, habitats, and lifestyles, there are many surprising consistencies across species in brain organization, suggesting that these might be effective, efficient, and general-purpose solutions.

Unraveling these circuits combines many disciplines, including painstaking neuroanatomical and neurophysiological analysis of the components and their connectivity. An important recent advance is the development of neurogenetic methods that provide precise control over the activity of individual neurons in freely behaving animals. However, the ultimate test of mechanistic understanding is the ability to build a machine that replicates the function. Computer models let researchers reproduce the brain’s processes, and robots allow these models to be tested in real bodies interacting with real environments (2). The following examples illustrate how this approach is being used to explore increasingly sophisticated control problems, including predictive tracking, body coordination, navigation, and learning.

The visual target tracking of dragonflies has been replicated on a (wheeled) robot platform performing active pursuit (3), giving new insight into the neural mechanisms. The starting point was neurophysiological characterization of the responses of small target motion detector (STMD) neurons in the dragonfly brain. These show a distinctive facilitation profile, that is, a slow buildup of activity to targets that move on consistent trajectories in the visual field. A computational neural model incorporating such facilitation properties was shown to improve tracking performance in the presence of clutter and distractors, even outperforming state-of-the-art computer vision algorithms (4). The implementation on the robot involved insect-like early visual processing, including resolution, spectral sensitivity, and temporal and spatial high-pass filtering such that the receptors respond most to rapid changes in the stimulus. The passage of fast-moving small objects against the background can be detected from a local rise followed by a fall (or vice versa) in intensity of receptor activation. In a retinotopic array of STMDs (as a neural map), center-surround inhibition and a winner-take-all process (suppressing all but the strongest signal) select a single target position, and its direction and rate of motion are used to facilitate the activation of model STMDs in the predicted future location. The facilitation enhances pursuit and may explain selective attention responses observed in downstream neurons (5).
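The core of this loop can be sketched in a few lines. In the sketch below, a plain arg-max stands in for the center-surround inhibition and winner-take-all competition, the previous winner is used to estimate target velocity, and the grid size, time constant, and gain are illustrative assumptions rather than parameters of the published model (4):

```python
import numpy as np

H, W = 32, 48        # retinotopic grid of model STMDs (size assumed for illustration)
TAU = 0.2            # facilitation time constant in seconds (assumed)
GAIN = 2.0           # strength of buildup at the predicted target location (assumed)

facilitation = np.ones((H, W))   # multiplicative gain map over the STMD array
prev_winner = None

def stmd_step(raw_response, dt=0.01):
    """One update: select the strongest small-target response and facilitate its predicted next position."""
    global facilitation, prev_winner
    # Facilitation modulates the raw small-target responses.
    response = raw_response * facilitation
    # Winner-take-all: keep only the strongest signal (arg-max as a stand-in).
    winner = np.unravel_index(np.argmax(response), response.shape)
    # Target velocity estimated from the shift of the winning position.
    vel = (0, 0) if prev_winner is None else (winner[0] - prev_winner[0],
                                              winner[1] - prev_winner[1])
    prev_winner = winner
    # Slow decay back to baseline everywhere, slow buildup at the predicted location.
    facilitation += (1.0 - facilitation) * dt / TAU
    pred = (int(np.clip(winner[0] + vel[0], 0, H - 1)),
            int(np.clip(winner[1] + vel[1], 0, W - 1)))
    facilitation[pred] += GAIN * dt / TAU
    return winner, vel
```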

When the robot makes a quick movement (a saccade) to visually pursue the target, this will change the relative position of the target in the visual field (e.g., to keep it centered). Hence the position in the neural map to which the facilitation should be propagated depends not only on the target’s motion but on the robot’s (or dragonfly’s) own motion. This means that the target pursuit system must receive some information about the motor command. In addition, the implementation on a robot demonstrated the robustness of the model to challenges such as changing illumination and unexpected motor disturbance (bumps). It also confirmed that the optimal time constant for facilitation depends on the specific circumstances (target velocity and background clutter), suggesting that STMD neurons should exhibit dynamic modulation of facilitation. The neural model on the robot thus allowed neural data that had been collected from an immobilized insect to be understood in the context of continuous behavioral control in natural conditions, predicting that further experiments should reveal inputs from motor systems and dynamic modulation of the STMD response.
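How the motor command could enter this prediction can be shown as a one-line correction; the convention that the efference copy arrives as the retinal displacement produced by the saccade is an assumption of this sketch:

```python
def predict_location(winner, target_vel, saccade_shift):
    """Predicted retinal position of the target: its own motion, minus the shift of the
    whole visual field produced by the robot's (or dragonfly's) commanded saccade.
    `saccade_shift` is an efference copy expressed in retinal (row, column) units."""
    return (winner[0] + target_vel[0] - saccade_shift[0],
            winner[1] + target_vel[1] - saccade_shift[1])
```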

Insect target tracking behavior has also been examined in the praying-mantis–inspired “mantisbot” (6). Here the focus is on how the detected position of a visual target can be translated into the complex coordination of head, body, and leg joints in a hexapod to make a successful orienting movement. The solution implemented on the robot exploits a detailed, distributed leg control network based on local reflexes [also used to model walking control for the stick insect (7)] that can be modulated by relatively simple high-level signals to alter the stepping motion toward a given target direction. The same network can also, through a simple switch, be used to control posture changes instead of walking, corresponding to the animal tracking the target with its head and body only. The tuning of the network in the mantisbot was based on (robotic) methods of inverse kinematics, in which the geometric relation between joint angles and end-of-limb position is inverted to derive the joint angles required to reach a desired end-of-limb position. This method allowed a deterministic setting of the synaptic values in the model that would have been set by evolution in the animal.
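For a single two-segment limb moving in a plane, that inverse-kinematics step reduces to textbook geometry; the sketch below only illustrates the step, since the real mantisbot legs have more joints and three-dimensional geometry:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Return (proximal, distal) joint angles that place the tip of a two-segment
    planar limb with segment lengths l1 and l2 at the point (x, y)."""
    # Law of cosines gives the distal joint angle.
    cos_distal = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_distal = max(-1.0, min(1.0, cos_distal))   # clamp against rounding errors
    distal = math.acos(cos_distal)
    # Proximal angle: direction to the target minus the offset contributed by the distal segment.
    proximal = math.atan2(y, x) - math.atan2(l2 * math.sin(distal),
                                             l1 + l2 * math.cos(distal))
    return proximal, distal
```

Solving such geometry offline yields the joint-angle targets from which the synaptic values of the leg network can then be set deterministically.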

The mantisbot controller demonstrated that descending information from the insect brain to motor circuits can be in the simple form of a desired vector of motion. Additionally, it showed that it is crucial even for simple saccades that the brain maintains a short-term memory of the position of the prey. Other insect behaviors require more sophisticated directional memory, such as the ability of ants, bees, and wasps to maintain an estimate of their home location during long and convoluted foraging excursions, by continuous integration of their velocity (path integration). The underlying neural circuitry for this advanced spatial capacity has been unraveled (8). The insect central complex (CX) receives celestial compass inputs (9) and encodes heading direction relative to visual targets and self-motion (10). Identified neurons that have the required connectivity to combine this information with the speed (estimated from the motion of the visual surrounding) could form the basis of a distributed vector memory, constantly updated to reflect the geocentric location of the animal relative to its starting point (8). Moreover, the precise and highly regular connectivity pattern between these neurons and specific output neurons of the CX provides a mechanism for steering the animal home, essentially by evaluating (before acting) whether turning left or right would most improve alignment to the target. A neural model that copies CX neuroanatomy at the single-neuron level can thus explain the path integration capability of insects (8).
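One way to picture the distributed vector memory is as a set of columns, each accumulating the component of the animal’s velocity along its own preferred direction. The cosine columnar code and linear accumulation below are simplifying assumptions of the sketch, not claims about the underlying biophysics (8):

```python
import numpy as np

N_COLS = 8                                     # columns per CX hemisphere
PREF = np.arange(N_COLS) * 2 * np.pi / N_COLS  # preferred directions, 45 degrees apart

memory = np.zeros(N_COLS)                      # distributed vector memory

def integrate_step(heading, speed, dt):
    """Each column adds the component of the current velocity along its preferred direction."""
    global memory
    memory += speed * np.cos(heading - PREF) * dt

def decode_memory():
    """Read out the stored vector (the animal's location relative to its starting point),
    up to a constant scale factor, for inspection."""
    x = np.sum(memory * np.cos(PREF))
    y = np.sum(memory * np.sin(PREF))
    return np.hypot(x, y), np.arctan2(y, x)    # distance, direction
```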

This model has recently been extended with a proposal for how insects could return to a discovered food source and take efficient routes between multiple sources (11). This would require that a snapshot of the state of the vector memory could be stored for salient locations in the world and then reactivated—to interact with the same steering circuitry—when the animal wants to revisit the location. As yet, the neural basis for such a memory is unknown.

The CX model has been demonstrated to work for path integration on both wheeled and flying robots. However, the key “robotic” contribution to understanding this circuit was mostly conceptual. Taking a robotic perspective meant that, rather than focusing on how the CX neurons “represent” external stimuli, the question became, how do the neurons transform the stimuli into the control of action? For example, accumulating speed in eight directions, following the eightfold columnar structure in each half of the CX, is a redundant Cartesian encoding (using more axes than required) of the home vector. However, it greatly simplifies the subsequent calculation of the desired turning direction, allowing a simple column shift to the right or left to “rotate” the vector by 90°.
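Under this redundant columnar code, comparing the homeward direction with the current heading rotated to the left or to the right is just an index shift. The sketch below continues the conventions of the path-integration sketch above (the memory accumulates the outbound vector, so its reverse points home); the cosine heading code, the 90° comparison, and the sign convention are assumptions of the sketch:

```python
import numpy as np

N_COLS = 8
PREF = np.arange(N_COLS) * 2 * np.pi / N_COLS   # 45-degree column spacing

def steer(memory, heading):
    """Decide a turn by asking, before acting, whether rotating the current heading to
    the left or to the right would align it better with the homeward direction.
    With 45-degree columns, np.roll by two positions is the 90-degree column shift."""
    heading_code = np.cos(heading - PREF)        # columnar encoding of the current heading
    homeward = -memory                           # reverse of the accumulated outbound vector
    left = np.dot(homeward, np.roll(heading_code, 2))    # heading rotated 90 degrees one way
    right = np.dot(homeward, np.roll(heading_code, -2))  # ... and the other way
    return left - right    # sign picks the turn direction, magnitude its strength
```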

Where next? Another prominent subcircuit that is coming under increasing scrutiny, found in the brains of all insects, is the mushroom body (MB). This region is known to be involved in associative learning of the value of olfactory stimuli. Its distinctive architecture, which has been compared to that of the vertebrate cerebellum (12), has been shown in multiple modeling studies, and some robot applications (13), to support pattern learning by encoding inputs as sparse activation of a small subset of a larger neural population and correlating this activity with a reward signal. A recent study directly evaluated the effectiveness of an augmented MB model on robot benchmark datasets for real-world place recognition (14). This work suggests that a key function of the MB is to produce an efficient and compact reencoding of stimuli (in this case, outdoor images from a moving platform over a long route) that can be exploited for recognition, even in changing conditions. The results show that the insect-inspired network produces performance comparable to that of state-of-the-art deep-learning approaches for autonomous navigation, with a much smaller and faster computational footprint (13).
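The flavor of this sparse reencoding can be captured in a few lines. The layer sizes, the random input-to-KC wiring, and the single familiarity output below are assumptions of the sketch, not details of the model evaluated in (14):

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_KC, K_ACTIVE = 400, 20_000, 200        # input, Kenyon cell, and active-subset sizes (assumed)
PROJ = (rng.random((N_KC, N_IN)) < 0.01).astype(float)   # sparse random input-to-KC wiring
w_out = np.ones(N_KC)                          # KC-to-output synaptic weights

def kc_code(view):
    """Sparse KC code: only the K most strongly driven cells fire for a given input view."""
    drive = PROJ @ view
    code = np.zeros(N_KC)
    code[np.argsort(drive)[-K_ACTIVE:]] = 1.0
    return code

def store(view):
    """Learn a view by depressing the output synapses of the KCs it activates."""
    w_out[kc_code(view) > 0] = 0.0

def unfamiliarity(view):
    """Output-neuron response: low for views resembling stored ones, high otherwise."""
    return float(w_out @ kc_code(view))
```

Recognition then amounts to asking whether a new view drives a low (familiar) or high (unfamiliar) output.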

However, modelers have not yet converged in their accounts of the key MB learning mechanisms. Most (but not all) focus on a change in synaptic weight between the parallel fibers of the Kenyon cells (KCs), encoding the stimulus, and the output neurons. The output neurons are sometimes interpreted as encoding the response, and sometimes the predicted stimulus value. In some models, the synaptic change depends on coincident firing of KCs and output neurons; in other models, on delivery of a reward signal (or alternatively, a prediction error signal) by dopaminergic neurons that target the synapse; and some models combine both mechanisms. Moreover, there is a cornucopia of new information emerging about the precise anatomy and individual neural function of the MB, particularly for neurogenetic model systems such as the fruit fly (Drosophila melanogaster), which has yet to be incorporated into computational or robot models. For example, the MB is divided into multiple compartments in which specific reward inputs target specific output neurons, and the KCs, output, and dopaminergic neurons form distinct tripartite synapses, suggesting a more complex flow of information between them.
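The contrast between these rule families can be made explicit; the signs, learning rate, and matrix conventions below are arbitrary modeling choices used only to show where the accounts differ:

```python
import numpy as np

def update_coincidence(w, kc, out, lr=0.01):
    """Variant 1: KC-to-output weights change where KC and output-neuron firing coincide."""
    return w - lr * np.outer(out, kc)          # depress co-active synapses

def update_dopamine(w, kc, dopamine, lr=0.01):
    """Variant 2: the change is gated by a dopaminergic reward (or prediction-error) signal
    targeting the synapses of the currently active KCs, independent of output firing."""
    return w - lr * dopamine * kc              # the same change is applied across all output rows
```

Models that combine both mechanisms make the update depend on the product of all three factors, a so-called three-factor rule.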

What about modeling the whole insect brain? Several groups, inspired by detailed D. melanogaster brain wiring diagrams, are now pursuing this target (15). But including more detail in brain models for its own sake is unlikely to lead to insights unless it is grounded in an understanding of behavior. For example, the MB seems overengineered for forming simple odor-value associations—indeed, it evolved to deal with the dynamic complexity of actively responding to fluctuating stimulus streams in real environments. Posing such a problem for a robot should be an effective way to illuminate the key computations involved and to rigorously evaluate new models. It could also result in smarter robots.