ASSISIbf
Animal and robot Societies Self-organise
and Integrate by Social Interaction (bees and fish)


Having established that the basic CASU functionality works as expected, the ASSISIbf team was ready to undertake the first collective-behavior experiments, led by Rob Mills of Lisboa, who had worked out and coded up a number of interesting test cases.

In the first group of experiments, two CASUs were heating opposite corners of the arena: one provided the bees’ preferred temperature of 36 degrees, while the other heated to a temperature two degrees lower. After the bees had aggregated at the optimal spot, the heating CASUs were turned off and two new attractive spots were created in the other two corners of the arena. Again, one spot was only locally optimal while the other was globally optimal. The experiment was performed in two variants: one with an abrupt optimum change, as described above, and another where the optima moved along a chain of neighboring CASUs. The goal was to see whether the CASU array can be used to “guide” the bees to the global optimum. Further experiments are needed to draw definite conclusions; however, in the experiments performed, the CASUs decreased the bees’ time to reach the global optimum.

Now THAT is what I call an aggregation!

The other set of experiments was the first step towards the ambitious ASSISIbf goal of interaction between spatially separated societies. Two groups of bees, physically separated in two smaller arenas within the CASU array, were required to coordinate their decision on an aggregation spot. The CASUs “counted” the bees in their surroundings by means of IR proximity sensors, closing a positive feedback loop by producing more heat when counting more bees. This was the first set of fully autonomous CASU experiments!
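The feedback rule can be sketched in a few lines of Python. This is purely an illustrative mock-up, not the project's actual CASU API: the class, method names, base temperature, and gain are all invented for the example.

```python
def target_temperature(bee_count, base_temp=28.0, gain=1.0, max_temp=36.0):
    """Map an IR-based bee count to a heater setpoint (degrees C).
    More detected bees -> more heat, capped at a safe maximum."""
    return min(base_temp + gain * bee_count, max_temp)

class MockCasu:
    """Hypothetical stand-in for a real CASU: counts bees, sets a heater."""
    def __init__(self):
        self.setpoint = None

    def count_bees(self):
        # A real CASU would threshold its IR proximity readings here.
        return 5

    def set_temp(self, temp):
        self.setpoint = temp

def control_step(casu):
    """One iteration of the positive feedback loop described above."""
    casu.set_temp(target_temperature(casu.count_bees()))

casu = MockCasu()
control_step(casu)
print(casu.setpoint)  # 33.0 with the illustrative defaults
```

The cap at `max_temp` reflects the bees' preferred 36 degrees mentioned above; everything else is an assumption of this sketch.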

So close no matter how far!

Additional experiments featured a mixed society, but the details of this experiment will be kept secret for the time being.

This experiment is still Top secret!

Some experimental results are not ready to be publicized yet!

One of the highlights of the workshop was the bee detection and tracking software implemented and tested by Marcelo of EPFL. Due to the bees’ high density and unpredictable motion, bee tracking is a notoriously difficult problem, and to the best of our knowledge no robust solutions, whether commercial or academic, are currently available. Well, at least until the end of this week, when Marcelo adapted his fish-tracking tool to tracking bees. It took a lot of coding and some adjustments to the environment

Reliable tracking requires perfect environmental conditions.

but the results are more than impressive:

https://www.youtube.com/watch?v=OPHZg52_irA&feature=youtu.be

All in all, the whole ASSISIbf team is satisfied with the progress achieved and confident that they can keep up the dynamic pace set out in the project DoW.


The ASSISIbf team gathers in Graz for a training session on the CASU array functionality. The Zagreb team has produced a 3×3 CASU array, the first prototype of the VRRI (Virtual Reality to Reality Interface), the device designed to help us communicate with groups of honeybees in novel ways. The Graz and Zagreb teams have been debugging and finalizing the prototype for the past three weeks, just as the honeybees are starting to wake up from their winter hibernation. Researchers from Lisboa and Lausanne have joined them today, for a week of intensive experimentation.

The brainstorming session that kicked off the training.

The plan for the week is to perform two types of experiments:

  1. Several individual experiments to showcase the stimuli-generating and sensing capabilities of the CASU array
  2. Collective behavior experiments, closing the loop between the bees and CASUs

Furthermore, EPFL’s Marcelo Elias de Oliveira will modify and test the real-time image tracking system in the Bee-arena. His tracking software is already being used successfully in zebrafish experiments in Paris, and Marcelo will use this week to make the modifications necessary for tracking the bees.

Marcelo is working on the Bee-tracking software, while Rob and Damjan are setting up connections to the CASU Control boards.

Martina is preparing the thermal camera for temperature experiments.

After overcoming some technical difficulties on day 1, we were able to conduct the first successful experiments on the second day of the workshop. We analyzed the heat propagation properties of the arena by heating and cooling one CASU and observing the arena with a thermal camera.

Martina and Rob analyze heat propagation in the CASU array.

In the second experimental set of the day, bees and CASUs made first contact. We released a group of 40 bees into the arena and let them wander around. The CASUs were programmed to signal bee detections (made by the IR sensors) with the diagnostic LED.

Bee detections trigger CASU LED signals.

 


Organizers: José Halloy, Universite Paris Diderot; Thomas Schmickl, Karl-Franzens-University Graz; Stuart Wilson, University of Sheffield.

Our workshop drew an audience of approximately 20 people, similar to the other satellite workshops at Living Machines 2014, held on 29 August 2014 in Milano.

by Thomas Schmickl:

Tim Landgraf:

Tim announces that he will play devil’s advocate, coining arguments against robot–animal societies as scientific tools. He shows the waggle dance and Karl von Frisch’s experiments to decipher it. Then he continues with RoboBee details: a plotter machine was used as the basis for building the bee robot, and his team extended it with heating, flapping wings, and the ability to offer sugar-water probes to bees. Some nice movies of RoboBees, successes and non-successes, were shown. The team used harmonic radar to track the bees’ flights. Tim explains an experiment with two real rewarding food sources and one virtual food source which never rewarded a bee but which RoboBee advertised as a good food source. The bees followed the dances but did not often respond to the advertised source; they often flew instead to one of the non-advertised food sources where they had been rewarded before (by flying there and drinking sugar water). Tim brings a huge list of what is unknown about aspects of the experiments: many unknown variables on the robot side and many unknowns (motivation, etc.) on the bee side. Due to their vast number, he considers them unlikely to be explorable exhaustively, as assessing each one will require specific experimentation. In his upcoming project he plans to observe all dancing interactions in a full colony (who danced with whom, where, and for how long?), using several high-performance cameras and computer clusters. This way he plans to generate a Facebook of a beehive, or maybe a honeybee version of NSA’s PRISM program? :-)

Andy Adamatzky:

Presenting Physarum machines, which are basically growing slime moulds but, in his words, are a biological implementation of Kolmogorov machines (1950s), which were proposed as an alternative to Turing machines. Slime mould grows over a set of oat-flake depots and forms structures close to minimum spanning trees. He then describes several setups/manipulations of Physarum machines that have to be implemented to allow them to grow into specific types of structures. How to program a Physarum machine? There is no single answer: the team uses repellents for spreading and merging; in addition, gradients and aromatic substances can be used to guide/control the slime mould’s growth. He showed several experimental comparisons of continents/countries and their turnpike networks with the corresponding network that arises in a slime mould when the oat flakes are placed in relative positions matching the larger cities of the chosen country. 3D-printed models depicting mountains were also used, and similarity between the real-world traffic network and the slime-mould network can be found. Andy then showed how slime mould can be used as a bio-hybrid sensor hair, exploiting the fact that slime mould changes its oscillation frequency after being touched. A slime-mould-based color sensor and a slime mould as chemical sensor were also presented. Slime mould can also act as a connection, like a cable. It was finally left open whether it will be possible to rebuild even galaxies or the whole universe (including the big bang?) with slime mould :-). A more serious outlook was to introduce the concept of emotions to Physarum machines.

= LUNCHBREAK =

Jose Halloy:

Jose starts with a tour of classical ethological experimentation (e.g. Tinbergen’s use of dummies to trigger behavioral responses). (Of course, he is again omitting Konrad Lorenz on that issue. :-( ) Jose presents a wrap-up of the results obtained from his experiments with a mixed society of mobile robots and cockroaches (Science 2007) and an extension of the model he published in PNAS 2006. Jose elaborates on the importance of having a behavior-based model in order to introduce robots into an animal society; this is important so that the animals accept the robots as part of their own society. He describes the difficulty/impossibility of going from a microscopic model to a macroscopic one or the other way around. In a new approach, he fits/tunes the parameters of a finite-state machine or a Markov model by using an evolutionary algorithm to alter the model’s parameters until it produces the macroscopic system behavior. Finally, a system was discussed that also derives the finite-state machine itself from empirically observed data, to close the loop. This is an important step, as deriving models and parameter values from observed data in an automated way may finally lead to an autonomous and automatic research machinery. Science without scientists? Maybe.
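As a toy illustration of this parameter-fitting idea (not Jose's actual implementation, and with invented numbers), a (1+1) evolution strategy can tune the transition probabilities of a two-state Markov chain until its stationary behavior matches an observed macroscopic statistic:

```python
import random

def stationary_p0(a, b):
    """Stationary probability of state 0 for a two-state Markov chain
    with transition probabilities a (0 -> 1) and b (1 -> 0)."""
    return b / (a + b)

def evolve(target, generations=2000, seed=1):
    """(1+1) evolution strategy: mutate the chain's parameters and keep
    the mutant whenever it matches the target macroscopic statistic
    at least as well as the parent."""
    rng = random.Random(seed)
    a, b = 0.5, 0.5
    best = abs(stationary_p0(a, b) - target)
    for _ in range(generations):
        # Gaussian mutation, clamped to keep the probabilities valid
        na = min(max(a + rng.gauss(0, 0.05), 0.01), 0.99)
        nb = min(max(b + rng.gauss(0, 0.05), 0.01), 0.99)
        err = abs(stationary_p0(na, nb) - target)
        if err <= best:
            a, b, best = na, nb, err
    return a, b, best

# Suppose observation says the animals spend 70% of their time in state 0:
a, b, err = evolve(target=0.7)
```

The same loop structure applies when the "model" is a full finite-state machine and the fitness is a richer macroscopic measurement; only the evaluation step changes.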

Adam Miklosi:

Adam talks about dogs, approaching the topic first from the historical side: the when, where, and why of the domestication of dogs by humans. Then he reports on the Swarmix project, which combines flying drones, dogs, and humans for search & rescue operations. He raises several important questions, like “why should a dog follow a flying robot in a cooperative way?”. A series of experiments was shown that used simple radio-controlled cars (“mechanic robots”), radio-controlled cars that also showed weird, non-perfect behaviors (“social robots”), and humans that behaved robot-like (“human robots”). It was first found that there are significant differences in the reaction behaviors (like attention) the dogs show towards those agents. Finally, it was found that real humans can influence dogs’ food-choice behavior, while “mechanic robots” cannot and “social robots” can, at least a bit.

Stuart Wilson:

Stuart talks about genes, neurons & synapses first, stressing the fact that self-organization allows ~10^5 genes to code for ~10^11 neurons and ~10^14 synapses, and the consequences this might have for the spatial self-organization of brain systems. He starts with a tutorial on self-organizing cortical maps, then gives examples of self-organizing sensorimotor maps, and finally self-organizing behavior. He shows the Kohonen SOM (self-organizing map) approach and compares the finally emerging pattern (a folded network) to images of the visual cortex, which shows similarities. Then he continues with the LISSOM approach, which extends the SOM concept with long-range inhibitory connections and short-range excitatory connections. Using this method, which is close to the famous Turing model of morphogenesis (see also Murray for skin/fur patterns, or D’Arcy Thompson’s “On Growth and Form”), it is possible to grow cortex models that behave and look very similar to the visual cortex of vertebrates. In mice & rats this is not found (an exception to the rule?), but similar patterning is found in the way the placement of the whisker hairs on the rat’s face, and their direction preference, are represented in the rat’s brain. Here too, LISSOM can be used to train/generate comparable maps. He shows an implementation of such LISSOM maps in a moving rodent-like robot that senses its environment through its whiskers. Finally, Stuart shows how rat pups live in the first two weeks after being born, which is the timespan in which those brain nets develop: they cuddle together in the nest with gazillions of touch contacts. He shows the thermoregulating aspect of the cuddling and a model that also incorporates the behavior of the pups, which in fact arises from their neural maps of the environment and their inclusion in the sensorimotor loop. So he suggests a major neuro-behavioral-physical-developmental feedback loop regulating all those aspects.
Rat cuddling groups seem a bit similar to the honeybees’ winter cluster, as my (T.S.) final thought.
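For readers unfamiliar with the Kohonen approach, here is a minimal one-dimensional SOM sketch (illustrative only, with invented learning-rate and radius schedules; LISSOM itself adds the lateral excitatory/inhibitory connections described above):

```python
import random

def train_som(n_nodes=10, steps=3000, seed=0):
    """Train a 1-D Kohonen SOM on uniform scalar inputs in [0, 1].
    Each step: find the best-matching unit (BMU), then pull the BMU
    and its chain neighbours towards the input, with a shrinking
    neighbourhood radius and learning rate."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_nodes)]
    for t in range(steps):
        x = rng.random()
        lr = 0.5 * (1 - t / steps) + 0.01               # decaying learning rate
        radius = max(1, int(n_nodes / 2 * (1 - t / steps)))
        bmu = min(range(n_nodes), key=lambda i: abs(w[i] - x))
        for i in range(n_nodes):
            d = abs(i - bmu)
            if d <= radius:
                h = 1.0 - d / (radius + 1)              # neighbourhood falloff
                w[i] += lr * h * (x - w[i])
    return w

weights = train_som()
```

After training, the node weights spread out to cover the input range, which is the one-dimensional analogue of the folded cortical maps mentioned above.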

Thomas Schmickl:

Finally, I gave a presentation on the aggregation behavior of young honeybees, which manage to find optimal locations in a temperature arena in a decentralized and swarm-intelligent way. I introduced the first outcomes of our project ASSISIbf, including novel findings on bees’ reactions to light emissions and vibration. In several movies, I showed the first-generation prototypes of ASSISIbf honeybee robots (CASUs) in action and in interaction with honeybees. Several videos demonstrated that we can control honeybee aggregations by temperature emissions and also by vibrations produced by CASUs. In a final demonstration, first experiments were shown in which we were able to close the loop for the first time: robots reacted to the local presence of bees, and bees in turn reacted to those robots locally. In addition, I showed the derivation of classification algorithms that can be used to predict the local honeybee density around a CASU, as well as first steps in using evolutionary computation to generate CASU software controllers in order to modulate the collective behavior of the honeybee society associated with them.


The ASSISIbf project aims to create a new class of bio-hybrid systems by integrating artificial agents capable of autonomous computation, called CASUs (Combined Actuator-Sensor Units), with animal societies (honeybees and fish). The CASUs are expected to become powerful tools that will provide new experimental insights into collective phenomena.

Programmability and autonomous computation are key capabilities for making the CASUs useful experimental tools. Researchers with a background in biology or collective-systems theory should be able to quickly implement and deploy algorithms for interacting with the animals. Furthermore, because experimenting with animals is time-consuming and can have seasonal constraints (e.g. experimenting with honeybees is only possible between May and September), simulation tools are necessary to facilitate algorithm development. The transition from simulation to actual hardware should be seamless, requiring no changes to user code.

Figure 1: Overview of software components.

To meet the above requirements, we have developed the software infrastructure shown in Figure 1. The key component is the middleware, built on top of the ZeroMQ networking library, with Google’s Protocol Buffers used for message serialization. It provides a flexible and efficient message-based communication system, which decouples user code from low-level device details. This decoupling is the key ingredient enabling a seamless transition between simulation and hardware. On the user side, the Python programming language has been chosen, as it is well suited to a fast development cycle and has an extensive ecosystem of available libraries.
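A schematic Python example of this decoupling idea (purely illustrative: the class and method names are invented and do not correspond to the project's actual API):

```python
class CasuBackend:
    """Abstract transport. In the real system this role is played by the
    middleware, which would serialize the command (e.g. with Protocol
    Buffers) and ship it over ZeroMQ."""
    def send(self, command, value):
        raise NotImplementedError

class SimulatedCasu(CasuBackend):
    """Backend that delivers commands to a simulator."""
    def __init__(self):
        self.log = []
    def send(self, command, value):
        self.log.append((command, value))   # handled by the simulator

class HardwareCasu(CasuBackend):
    """Backend that would publish commands to the CASU firmware."""
    def send(self, command, value):
        pass  # e.g. publish on a ZeroMQ socket

def user_controller(casu):
    """User code is identical for both backends; only the backend
    object handed in changes between simulation and hardware."""
    casu.send("set_temp", 36.0)
    casu.send("set_led", "on")

sim = SimulatedCasu()
user_controller(sim)
print(sim.log)  # [('set_temp', 36.0), ('set_led', 'on')]
```

Swapping `SimulatedCasu` for `HardwareCasu` leaves `user_controller` untouched, which is exactly the seamless-transition property described above.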

Initial experiments have confirmed the flexibility of the software design. We were able to test CASU control algorithms in simulation and afterwards deploy the same code to the CASU hardware without any modification. The observed experimental behavior qualitatively matched the simulation results.

 


				

Introducing the Fish Robots


One of the goals of the ASSISIbf project is to create lures that can interact with zebrafish in an aquarium. These lures are now ready and are being tested in 1 m x 1 m aquariums with 10 cm of water. The lure consists of a passive device placed in the water and moved by a robot placed under the aquarium. Compared to other similar devices, our robot is very compact: 43 mm long, 22 mm wide and 67 mm high, with a mass of 80 grams. Moreover, the robot does not need batteries; it receives a continuous power supply through brushes that collect current from conductive plates above and below the robot. This allows long-duration experiments with multiple robots. Preliminary experiments with zebrafish will be presented at the Robio 2014 conference in Bali, Indonesia.

Fish lure: the passive device in the water is moved by the robot under the aquarium.


The interests of groups and the individuals in those groups are frequently in tension, whether in collective decision making or in social evolution (e.g. the evolution of cooperation).  Understanding how nature resolves such conflicts can be challenging and raises involved questions.  For instance: what population structures are sufficient to promote cooperative behaviour amongst what are inherently selfish individuals [1, 2]? How might those population structures themselves evolve as a consequence (and cause) of the cooperative traits [3]?  And what circumstances provide benefits for group living [4]?

Sasaki et al [5] investigate one set of behaviours that address this last question: they study how social ants make decisions about the quality of nest sites, examining the conditions under which a group is able to make better decisions than individuals.

Temnothorax ants, individually paint-marked inside a laboratory nest (http://en.wikipedia.org/wiki/Spatial_organization)

In their experiments, they present the ants with two potential nests that vary in quality — in light levels, in this case (darker nests are preferred). One option is kept constant, while the alternative nest becomes progressively worse, which makes deciding between them easier and easier.  They challenge either a single ant or a colony (of 20–250 ants) to decide between the nests. In keeping with prior work on collective decision making, they find that the collectives are able to make better decisions than individual ants faced with the same task — but only under some conditions.  Specifically, when the nests are similar in quality and the decision is difficult, groups outperform individuals.  For an easy decision individual ants do better.

The authors suggest that this surprising result comes down to the speed at which a colony can make a decision in comparison to a sole ant.  Individuals of a colony need not visit more than one site (because of the competitive recruitment process) and so a colony can effectively make decisions in parallel.  Conversely, a sole individual needs to make many visits to each site before being able to accurately compare, and they (appear to) make a decision before this process reliably splits the symmetry for sites of similar quality.

On the other hand, individuals are able to rapidly assess easy decisions, and need not pay any cost for aggregating social information, in contrast to the colony.  In short: a decision that does not need a population will be made more efficiently by an individual.

A previous study by the same team showed that difficulty could be presented to collectives of ants in an alternative manner [6]: by presenting the organisms with a small (2) or large (8) number of possible nests.  Here, both individual ants and colonies were able to make good decisions in the small case but only collectives were able to maintain good quality decision making when presented with more options.

In ASSISIbf we are interested in learning how we can modify the overall decisions made by animal groups via the interaction of robots and those groups.  If we are to understand how mixed societies may be most productive, it is important that we understand when group-level decisions are likely to be successful. Sasaki’s studies [5,6] show us that collectives might not always be able to make the best decisions (see also Kao & Couzin [7]).  Interestingly, in our mixed societies, the robots have the potential to overcome poor group-level decisions by shaping the information available to the animals.  Pinpointing this kind of “weakness” may yield a substantial impact on overall outcomes using relatively little resource.

 

Find out more on Takao Sasaki’s research, Stephen Pratt’s lab, and see the ants in action.

[1] Nowak, M. A., & May, R. M. (1992). Evolutionary games and spatial chaos. Nature, 359(6398), 826-829. http://dx.doi.org/10.1038/359826a0

[2] Santos, F. C., Pacheco, J. M., & Lenaerts, T. (2006). Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proceedings of the National Academy of Sciences of the United States of America, 103(9), 3490-3494. http://dx.doi.org/10.1073/pnas.0508201103

[3] Powers, S. T., Penn, A. S., & Watson, R. A. (2011). The concurrent evolution of cooperation and the population structures that support it. Evolution, 65(6), 1527-1543. http://dx.doi.org/10.1111/j.1558-5646.2011.01250.x

[4] Calcott, B. (2008). The other cooperation problem: generating benefit.  Biology & Philosophy, 23(2), 179-203. http://dx.doi.org/10.1007/s10539-007-9095-5

[5] Sasaki, T., Granovskiy, B., Mann, R. P., Sumpter, D. J., & Pratt, S. C. (2013). Ant colonies outperform individuals when a sensory discrimination task is difficult but not when it is easy. Proceedings of the National Academy of Sciences, 110(34), 13769-13773. http://dx.doi.org/10.1073/pnas.1304917110

[6] Sasaki, T., & Pratt, S. C. (2012). Groups have a larger cognitive capacity than individuals. Current Biology, 22(19), R827-R829. http://dx.doi.org/10.1016/j.cub.2012.07.058

[7] Kao, A. B., & Couzin, I. D. (2014). Decision accuracy in complex environments is often maximized by small group sizes. Proceedings of the Royal Society B: Biological Sciences, 281(1784), 20133305. http://dx.doi.org/10.1098/rspb.2013.3305


The teams spent the last two days of the training working hard to complete experiments and prepare materials for the upcoming Review Meeting.
The guys from Paris7, supported by Frank Bonnet of EPFL, learned how to program a fish robot using the ASEBA IDE. They implemented the algorithms they had developed for the control of a fish-CASU, joined the fish-CASU with real fish in a water tank, and ran some nice experiments.

Water tank with fish-CASU and real fish

Thomas Schmickl and Ronald Thenius invested their time in exploring the capabilities of the software and hardware developed by UNIZG. In the end, several scenarios were run, both in simulation and on real hardware. In the real experimental setup, small robots called Bristlebots (HEXBUG Nano) were used instead of real bees.

“Bee” arena for experiments with bee-CASUs

Payam Zahadat did her best to implement some cool algorithms on the Thymio-II robots. Her robots were improving from day to day, both visually and functionally. Marcelo Elias de Oliveira from EPFL supported her by implementing algorithms for visual tracking of the robots, thus providing pose information for each Thymio robot. Again, several experiments were performed using the developed setup.

Final arena for experimenting with Thymio-II robots

Luis Correia and Fernando Silva collaborated with Damjan Miklić to implement heat stimuli in the existing simulator. They implemented a model of heat dissipation and superposition in the 2D plane and ran many simulations to verify it.
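As an illustration of what such a model might look like (a generic sketch, not the partners' actual implementation; the falloff shape, width, and ambient temperature are invented), one can approximate each CASU as a heat source with Gaussian falloff and obtain the arena temperature by superposition:

```python
import math

def casu_field(x, y, cx, cy, power, sigma=2.0):
    """Temperature rise at (x, y) from one heat source at (cx, cy).
    A Gaussian falloff is a simple stand-in for heat dissipation."""
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    return power * math.exp(-d2 / (2 * sigma ** 2))

def temperature(x, y, sources, ambient=26.0):
    """Superposition: total temperature is the ambient temperature plus
    the sum of each source's contribution."""
    return ambient + sum(casu_field(x, y, cx, cy, p) for cx, cy, p in sources)

# Two CASUs heating with different powers, 9 units apart:
sources = [(0.0, 0.0, 10.0), (9.0, 0.0, 8.0)]
```

By construction the model is linear, so the field of several CASUs is exactly the sum of their individual fields, which is the superposition property being verified.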

All the experiments were recorded and will be prepared for the review meeting held in Paris next month. We can surely expect some very nice videos.

 


On the second day of the training, the teams continued to work on the tasks discussed and planned the previous day.
The EPFL and PARIS7 researchers worked on experiments connecting the fish-CASU and virtual fish. They succeeded in implementing the fish behavioral algorithm on both the mock-up fish and the virtual fish.

Mock-up fish in the aquarium

Fish-CASU with the virtual fish in the developed GUI

Payam Zahadat, a researcher from UNIGRAZ, worked on the Thymio-II robots, with the goal of emulating the bee behavior and the functionality of the CASUs. Some of her robots are getting more and more similar to bees, with antennae having grown overnight.

Payam Zahadat working with Thymio robots

The rest of the UNIGRAZ team, supported by the FER colleagues, continued to discover the possibilities of the developed simulator and bee-CASU. The teams were very excited after successfully running some code both in the simulator and on a CASU. Meanwhile, Luis and Fernando from UNILIS continued to enhance the capabilities of the simulator environment.

Ronald Thenius working with the simulator and bee CASUs

All partners attended the lecture given by Prof. Robert Wood from the Wyss Institute for Biologically Inspired Engineering, Harvard, USA. He talked about the development of the flying bee robot, RoboBee. Some astonishing results in the design, actuation, and control of the RoboBee were presented. The highlight of the presentation was the new methodology for manufacturing and assembling the small flying robots, based on a multi-layer mechanical structure and folding-based assembly.

The day finished with an informal meeting in the Salsa club in the center of Lausanne, where Damjan and especially Marcelo showed some really nice dancing moves. The rest of the team is encouraged to take dancing courses for the future social meetings.


The first training day started with a presentation of the resources of the Robotic Systems Laboratory, managed by ASSISIbf PI Francesco Mondada. Some very nice service robots were on display, as well as the aquarium that Francesco’s team uses for fish experiments. Francesco then presented Aseba Studio, the core of the software architecture for the fish-CASUs. He used the programming of Thymio-II robots to demonstrate the event-based nature of the Aseba IDE.
Damjan Miklić, a researcher from the FER team, gave a practical presentation of the capabilities of the developed software architecture (simulator, Python API) and the bee-CASU hardware. The developed system allows Python-based CASU controllers to be executed both in simulated and real CASUs, without the need to change a single line of code. Soon after, some Python code previously used by the UNIGRAZ team in the simulation environment was downloaded and executed on the CASU hardware.

Damjan Miklić presenting the software and hardware architecture of the bee CASUs

The teams were then divided into groups, each dealing with (for now) separate problems. The fish team, consisting of the EPFL and Paris partners, worked on integrating the fish behavioral algorithm into the EPFL experimental setup. The goal was to connect the real fish-CASUs with the virtual fish. Both the fish-CASU and the virtual fish are driven by the probability-based algorithm that Bertrand Collignon presented at the General Assembly. After only a few hours of work the results were more than promising, so we can soon expect some exciting videos.

The joint forces of the EPFL and PARIS7 teams

While the bee-CASUs are being prepared by the FER researchers for the planned experiments, the UNIGRAZ and UNILIS teams started to explore the Thymio-II robots. Since these robots have several proximity sensors (like the bee-CASUs), the idea was to emulate the interaction between bees and bee-CASUs using several Thymio robots. The experimental arena turned out to be very small, so we can expect very precise collision-avoidance algorithms.

The experimental arena for the Thymio robots

Meanwhile, Serge Kernbach from Cybertronica collaborated with a FER member to integrate and test electromagnetic emitters with the current CASU setup. The emitter was successfully controlled via the BeagleBone Black, and the intensities of both the electric and magnetic fields were measured. In addition, the capability of heating the CASU with the magnetic coils was tested and verified.

The bee CASUs with the equipment for measuring EM fields


For group-living animals, collective behaviors often depend on the perception of environmental cues and on social interactions among group members. As large animal societies often lack a global communication system, one of the most challenging problems faced by these groups is the coordination of all group members. It requires gathering information about environmental opportunities, transferring information between group members, and processing information by individuals. In large groups, individuals mostly respond to local information, since they only have access to limited knowledge and are unable to directly compare the different environmental opportunities. Thus, the collective patterns displayed by such groups frequently rely on decentralized processes based on amplifying loops, which are themselves based on direct interactions between individuals or on intermediate signals. Such mechanisms have been shown to rule various collective activities in numerous species, including humans (Figure 1; Bonabeau et al., 1997, Camazine et al., 2001, Couzin and Krause, 2003, Sumpter, 2006 and Moussaïd et al., 2009).

Figure 1. Examples of collective behaviors. (a) Fire ants forming a raft on water. (b) Honeybee swarm. (c) Starling murmuration. (d) Gnu migration. (e) Sardine school. (f) Human crowd.

Figure 1. Examples of collective behaviors. (a) Fire ants forming a raft on water. (b) Honeybee swarm. (c) Starling murmuration. (d) Gnu migration. (e) Sardine school. (f) Human crowd.

Among them, collective motion has been studied by researchers from numerous fields, like biology, physics, computer science and robotics. It has attracted increasing interest over the last decades and has led to a huge amount of experimental literature on fish schools, bird flocks, mammal herds and insect colonies. Faced with this amount of data, scientists have tried to infer the individual rules used by group members to produce the observed collective patterns. For this purpose, the increasing power of computers provides a very helpful tool by allowing the development of individual-based models.
The first models described group members’ interactions according to their spatial position: the movement of an individual is based on the positions of a subset of its neighbors. In his pioneering work in computer animation, inspired by the Particle Systems of Reeves (1983), Reynolds proposed in 1987 a sufficient set of rules to reproduce the flocking behavior of birds by describing the motion of geometrical objects called boids, a contraction of “bird-oid objects”. These agents follow three simple rules: “attraction” (boids are attracted by other agents), “velocity matching” (boids align their velocity with nearby neighbors) and “repulsion” (boids avoid collisions with too-close neighbors) (Figure 2).
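Reynolds' three rules translate almost directly into code. The following minimal Python sketch (with invented radii and weights) applies attraction, velocity matching, and repulsion to a list of boids:

```python
import math

def boids_step(boids, r_neigh=5.0, r_sep=1.0,
               w_coh=0.01, w_ali=0.05, w_sep=0.1):
    """One synchronous update of Reynolds' three rules. Each boid is a
    list [x, y, vx, vy]; neighbours are all other boids within r_neigh."""
    updated = []
    for i, (x, y, vx, vy) in enumerate(boids):
        coh = [0.0, 0.0]
        ali = [0.0, 0.0]
        sep = [0.0, 0.0]
        n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            dx, dy = ox - x, oy - y
            dist = math.hypot(dx, dy)
            if dist < r_neigh:
                n += 1
                coh[0] += dx; coh[1] += dy                # attraction
                ali[0] += ovx - vx; ali[1] += ovy - vy    # velocity matching
                if 0 < dist < r_sep:                       # repulsion
                    sep[0] -= dx / dist; sep[1] -= dy / dist
        if n:
            vx += (w_coh * coh[0] + w_ali * ali[0]) / n + w_sep * sep[0]
            vy += (w_coh * coh[1] + w_ali * ali[1]) / n + w_sep * sep[1]
        updated.append([x + vx, y + vy, vx, vy])
    return updated
```

Two distant boids drift towards each other (attraction dominates), while two very close boids push apart (repulsion dominates), reproducing the qualitative behavior of the three rules.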

Figure 2. Behavioral rules followed by the boids (in Reynolds, 1987; http://www.red3d.com/cwr/boids/). (a) Cohesion. (b) Velocity matching. (c) Repulsion.

The first demo of this model reproduced a flock of boids moving collectively and avoiding obstacles (http://www.siggraph.org/education/materials/HyperGraph/animation/art_life/video/3cr.mov), while the algorithm was first used in a computer-animated movie in "Stanley and Stella in Breaking the Ice", produced in 1987 (http://www.youtube.com/watch?v=3bTqWsVqyzE). A few years later, the same algorithm was used to produce the bat swarms and penguin armies invading Gotham City in Tim Burton's movie "Batman Returns" (Lebar Bajec and Heppner, 2009).
In parallel with the development of computer-simulated flocks, the study of collective motion has also attracted increasing interest among physicists. In their well-known study of 1995, Vicsek and his collaborators focused on the emergence of self-ordered motion in systems of particles mimicking biological interactions. They described the motion of self-propelled particles interacting with all other particles situated within a fixed distance. At each time step, the interactions between a particle and its neighbors are computed and the particle updates its position accordingly. In this model (and in the numerous ones derived from it over the last 20 years), the main process is divided into three steps: i) gathering information, ii) processing information and iii) updating position.
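In the canonical Vicsek update, the "processing" step consists of taking the average heading of the neighbors and perturbing it with noise. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def vicsek_step(pos, theta, v=0.03, r=1.0, eta=0.1, L=5.0, rng=None):
    """One Vicsek update in a periodic box of side L.
    i) gather: find neighbors within distance r;
    ii) process: take the circular mean of their headings, plus noise;
    iii) update: move at constant speed v along the new heading."""
    if rng is None:
        rng = np.random.default_rng()
    new_theta = np.empty(len(pos))
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = d < r  # includes the particle itself, as in the original model
        mean_heading = np.arctan2(np.sin(theta[nbrs]).mean(),
                                  np.cos(theta[nbrs]).mean())
        new_theta[i] = mean_heading + eta * rng.uniform(-np.pi, np.pi)
    new_pos = pos + v * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return new_pos % L, new_theta
```

Varying the noise amplitude eta reveals the phase transition reported by Vicsek et al. (1995): below a critical noise level the particles spontaneously align into collective motion.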
Multiple hypotheses have been proposed to determine the subset of individuals that influence the motion of a focal fish. As described above, the first ones were based on metric perception: a fish takes information from all individuals situated within a fixed distance. A second rule, called "topological perception", states that a fish moves according to the positions of its closest neighbors. Another hypothesis, based on Voronoi diagrams, describes fish movement as a function of a tessellation of their neighbors (Figure 3; for a comparison between these three postulates, see Strandburg-Peshkin et al., 2013). Although these models reproduce quite exhaustively the global patterns observed in flocking populations, we still lack clear experimental evidence to validate them.
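The metric and topological rules are straightforward to express in code; a Voronoi neighborhood would additionally require a Delaunay triangulation (e.g. via scipy.spatial) and is omitted here. This sketch is purely illustrative:

```python
import numpy as np

def metric_neighbors(pos, i, radius):
    """Metric perception: all individuals within a fixed distance
    of focal individual i (pos is an (N, 2) array of positions)."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    return np.where((d > 0) & (d < radius))[0]

def topological_neighbors(pos, i, n):
    """Topological perception: the n closest individuals,
    whatever their distance from the focal individual."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    return np.argsort(d)[1:n + 1]  # index 0 is the focal individual itself
```

The two rules can give very different answers for the same configuration: when the group stretches out, a metric neighborhood may become empty while a topological one always returns n individuals.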

Figure 3. Possible types of perception. (a) Metric perception: all neighbors within a fixed distance. (b) Topological perception: the n proximal neighbors (5 in our example). (c) Voronoi perception: all individuals connected to the focal individual by a Delaunay triangulation.

For this reason, current experimental and theoretical research is now investigating the mechanisms used by fish from a "sensory" perspective (Lemasson et al., 2009; Lemasson et al., 2013; Strandburg-Peshkin et al., 2013). In these models, the positions of a fish's neighbors are not described in Cartesian coordinates but are represented in the focal fish's perception field and thus projected onto a simplified retina (Figure 4). The model then simulates the neurological processes that interpret the images received by the retina. Although these neurological processes remain largely unknown, this approach brings us closer to an effective description of individual behavior.
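A minimal version of such a projection reduces each neighbor to two quantities on a one-dimensional retina: its bearing relative to the focal fish's heading, and its angular size, which grows as the neighbor gets closer. The fixed body-length parameter below is a hypothetical simplification (real models also account for orientation, as in Figure 4):

```python
import numpy as np

def retinal_projection(focal_pos, focal_heading, neighbor_pos, body_length=1.0):
    """Project neighbors onto a simplified 1-D retina: bearing of each
    neighbor relative to the focal heading, plus its angular size."""
    offsets = neighbor_pos - focal_pos
    dist = np.linalg.norm(offsets, axis=1)
    bearing = np.arctan2(offsets[:, 1], offsets[:, 0]) - focal_heading
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    angular_size = 2 * np.arctan(body_length / (2 * dist))
    return bearing, angular_size
```

A sensory model then works only with these retinal quantities, never with the neighbors' Cartesian coordinates, which is what distinguishes it from the metric, topological and Voronoi rules above.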

Figure 4. Example of visual perception. We simulated the motion of 20 agents in a finite space and computed the visual field of a focal individual (the green one). All agents are represented in the visual field according to their body length and orientation.

In parallel with furthering our understanding of information processing by individuals, this approach is a new step towards bio-inspired algorithms that can be implemented in robotic agents. Indeed, a major scientific challenge, at the core of the ASSISIbf project, is to build artificial systems that can perceive, communicate with, interact with and adapt to animals (Schmickl et al., 2013). To do so, we need to develop artificial agents that communicate through appropriate channels corresponding to specific animal traits, and that correctly perceive and interpret the signals emitted by the animals (Halloy et al., 2013; Mondada et al., 2013). This was first achieved in 2007 with bio-inspired artificial cockroaches that were able to sense the presence of conspecifics and to adapt their behavior following a bio-inspired algorithm (Halloy et al., 2007). In this perspective, the development of perception-based models is an obligatory step toward intelligent artificial systems capable of closing the loop of interaction between animals and robots.

References

Bonabeau, E., Theraulaz, G., Deneubourg, J.L., Aron, S., Camazine, S., 1997. Self-organization in social insects. Trends Ecol. Evol. 12, 188-193.

Camazine, S., Deneubourg, J.L., Franks, N.R., Sneyd, J., Theraulaz, G., Bonabeau, E., 2001. Self-organization in biological systems, Princeton University Press.

Couzin, I.D., Krause, J., 2003. Self-organization and collective behavior in vertebrates. Adv. Stud. Behav. 32, 1-75.

Halloy, J., Mondada, F., Kernbach, S. and Schmickl, T. 2013. Towards bio-hybrid systems made of social animals and robots. In Biomimetic and Biohybrid systems, Second International Conference, Living Machines 2013, Eds. Lepora, N.F., Mura, A., Krapp, H.G., Verschure, F.M.J. and Prescott, T.J., London, UK, Proceedings.

Halloy, J., Sempo, G., Caprari, G., Rivault, C., Asadpour, M., Tâche, F., Saïd, I., Durier, V., Canonge, S., Amé, J.M., Detrain, C., Correll, N., Martinoli, A., Mondada, F., Siegwart, R. and Deneubourg, J.L., 2007. Social integration of robots into groups of cockroaches to control self-organized choices. Science, 318, 1155-1158.

Lebar Bajec, I. and Heppner, F.H. 2009. Organized flight in birds. Anim. Behav., 78, 777-789.

Lemasson, B.H., Anderson, J.J. and Goodwin, R.A., 2009. Collective motion in animal groups from a neurobiological perspective: The adaptive benefits of dynamic sensory loads and selective attention. J. Theor. Biol., 261, 501-510.

Lemasson, B.H., Anderson, J.J. and Goodwin, R.A., 2013. Motion-guided attention promotes adaptive communications during social navigation. Proc. R. Soc., 280, 20130304.

Mondada, F., Halloy, J., Martinoli, A., Correll, N., Gribovskiy, A. Sempo, G., Siegwart, R. and Deneubourg, J.L. 2013. A general methodology for the control of mixed natural-artificial societies. In Handbook of collective robotics, Ed. Kerbach, S., Pan Stanford Publishing.

Moussaïd, M., Helbing, D., Garnier, S., Johansson, A., Combe, M., Theraulaz, G., 2009. Experimental study of the behavioural mechanisms underlying self-organization in human crowds, Proc. R. Soc. B, 276, 2755–2762.

Reeves, W.T. 1983. Particle systems – A technique for modeling a class of fuzzy objects. Comp. Graph., 17, 359-375.

Reynolds, C. 1987. Flocks, herds, and schools: a distributed behavioural model. Comp. Graph, 21, 25-34.

Schmickl, T., Bogdan, S., Correia, L., Kernbach., S., Mondada, F., Bodi, M., Gribovskiy, A., Hahshold, S., Miklic, D., Szopek, M. Thenius, R. and Halloy, J. 2013. ASSISI: Mixing animals with robots in a hybrid society. In Biomimetic and Biohybrid systems, Second International Conference, Living Machines 2013, Eds. Lepora, N.F., Mura, A., Krapp, H.G., Verschure, F.M.J. and Prescott, T.J., London, UK, Proceedings.

Strandburg-Peshkin, A., Twomey, C.R., Bode, N.W.F., Kao, A.B., Katz, Y., Ioannou, C.C., Rosenthal, S.B., Torney, C.J., Wu, H.S., Levin, S.A. and Couzin, I.D., 2013. Visual sensory networks and effective information transfer in animal groups. Curr. Biol., 23, R709-R711.

Sumpter, D.J.T., 2006. The principles of collective animal behavior. Philos. T. Roy. Soc. B, 361, 5-22.

Vicsek, T., Czirok, A., Ben-Jacob, E., Cohen, I. and Shochet, O., 1995. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett., 75, 1226-1229.