Animal and robot Societies Self-organise and Integrate by Social Interaction (bees and fish)


The BeeFish game is now available!

Categories: Blog

The University of Lisbon (UNILIS) team developed a video game called BeeFish for the dissemination of the ASSISIbf project. The BeeFish game, for mobile devices, consists of two gameplay modes, one with bees and CASUs and the other with fish and CASUs, each with several levels. In both modes, players control CASUs to guide the animals to their goal, mimicking the experiments with real animals that are carried out by researchers of the project.

Screenshots of the BeeFish game.


The game is now available for free download for both Android and Apple iOS smartphones and tablets. To easily find the game:

* for Android, search for “BeeFish FCUL” on the Play Store or use this link:
* for iOS, search for “BeeFish FCUL” in the iOS App Store or use this link:

The game is already being used at UNILIS to disseminate the ASSISIbf project.


The ASSISIbf project and the BeeFish game were presented to high school students that visited UNILIS.

ASSISIbf Training FER 2015

Categories: Blog

On the 16th of December 2015, the ASSISIbf LARICS team organized a training session at the University of Zagreb, Faculty of Electrical Engineering and Computing (FER), for local students. The purpose of the training was to present, through selected lectures and a practical session, the overall goals of the ASSISIbf project and the results accomplished in the first half of the project. Twelve students attended the training and were divided into 4 groups of 3 students for the practical session. LARICS staff (Prof. Stjepan Bogdan, Dr. Damjan Miklić, Karlo Griparić, Tomislav Haus, Damir Mirković) were responsible for giving lectures and support during the practicals.

Prof. Bogdan gave a talk on the general concepts of the project. The students could learn about the FET programme, the FoCAS initiative, collective adaptive systems and the methodology applied within the ASSISIbf project. Dr. Miklić presented the software architecture developed for facilitating ethological experiments on honeybees. The emphasis was on the distributed and modular design of the developed framework. Students got familiar with the ZeroMQ (ZMQ) framework that is used as middleware in our system. Google Protobuf was introduced to the students as a powerful tool for message serialization/deserialization. The lectures were concluded by Karlo Griparić’s talk on the developed Combined Actuator Sensor Unit (CASU) arena. The students got familiar with the techniques that we use to design the arena, ranging from mechanical CAD design to PCB design and embedded system design.

Prof. Bogdan giving the lecture on ASSISIbf concepts.

In the practical session, the task for the students was to program CASUs. LARICS staff prepared the arena with 4 fully functional CASUs. Each group was given its own CASU to program. Bristlebots (HEXBUG Nano), tiny robots powered by a battery and a vibration motor, were used to mimic the presence of bees. After the presentation of the basic CASU controller, which included an introduction to the developed Python API, the students were given 3 tasks to complete.

The first task was the calibration of infrared (IR) sensors that we use to detect bees. The second task was to control the CASU temperature and color based on the number of detected robots. In particular, if a bristlebot is detected, the temperature reference should be increased by 0.5 °C (positive feedback), and if there is no detection in a specified time period (e.g., 10s), the reference should be decreased by 0.5 °C. The color of the embedded LED should be changed with respect to the measured CASU temperature. The first two tasks were successfully completed by each group.
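The rule for the second task can be sketched in plain Python. The helper names below are illustrative stand-ins, not the actual CASU Python API:

```python
# Sketch of the second training task (hypothetical helper names; the real
# CASU API differs). Rule: each detection raises the temperature reference
# by 0.5 degC; 10 s without a detection lowers it by 0.5 degC, and the LED
# colour follows the measured temperature.

DELTA = 0.5      # reference step in degC
TIMEOUT = 10.0   # seconds without detection before cooling down

def update_reference(ref, detected, time_since_detection):
    """Return the new temperature reference for one control step."""
    if detected:
        return ref + DELTA
    if time_since_detection >= TIMEOUT:
        return ref - DELTA
    return ref

def led_color(temp):
    """Map measured temperature to an RGB triple (blue = cold, red = hot)."""
    t = max(26.0, min(38.0, temp))
    frac = (t - 26.0) / 12.0
    return (frac, 0.0, 1.0 - frac)
```

On the real hardware, the students wired the same logic to the CASU's IR readings, temperature setpoint, and diagnostic LED through the Python API.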

Students worked in groups to solve the training tasks. We used live thermal-camera footage to monitor the arena temperature.

The final task was more ambitious and included estimating the number of bristlebots in the arena based on the IR detections. The students were free to develop any kind of algorithm, and it was interesting to see how quickly they employed their knowledge of machine learning. Eventually, one of the groups implemented a linear regression algorithm and started the learning procedure by collecting IR data while altering the number of bristlebots in the arena.
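The counting approach can be sketched with ordinary least squares. The data below are made up for illustration; the students fitted readings logged from the real CASU sensors:

```python
# A minimal sketch of the counting idea: fit a linear model from summed
# IR activity to the number of bristlebots (illustrative data).

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Summed IR readings recorded while varying the number of robots in the arena.
ir_sums = [120.0, 260.0, 410.0, 540.0]   # illustrative values
counts  = [1, 2, 3, 4]

a, b = fit_line(ir_sums, counts)
estimate = round(a * 330.0 + b)   # predicted count for a new reading
```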

Bristlebots were used to mimic bee motion. One of the tasks was to implement a bristlebot counting algorithm.

We hope that the insights the students gained during the training improved their programming skills and will help them in their ongoing and future projects. We also hope that we managed to familiarize the students with the daily activities typical of scientific projects such as ASSISIbf, and to encourage them to continue their education and pursue a career in science.


Introducing RiBot

Categories: Blog

One year ago, we posted an article on this blog introducing a wheeled mobile robot that could move passive lures inside an aquarium to interact with living zebrafish Danio rerio. During the past year, we have designed an active lure, RiBot, equipped with an actuated tail in order to mimic fish body movements underwater. RiBot is a combination of the words “riba” (рыба) and “robot” (робот), which mean fish and robot, respectively, in Slavic languages. This remotely controlled, waterproof device has a total length of 7.5 cm, only 1.8 times the size of a zebrafish. It can beat its tail with different frequencies and amplitudes while following the group of living animals using the external wheeled mobile robot, which is coupled to the robotic lure with magnets. A universal infrared TV remote control using the RC5 protocol can be used to switch between the tail-beating modes. The robotic lure is also equipped with a rechargeable battery and has an autonomy of more than one hour.
Preliminary experiments on mixed societies of an actuated lure combined with an external wheeled robot and zebrafish Danio rerio will be presented at the SWARM 2015 conference in Kyoto, Japan.


The first prototype of RiBot, compared with a zebrafish Danio rerio used during the experiments. (c) EPFL

Having established that the basic CASU functionality works as expected, the ASSISIbf team was ready to undertake the first collective-behavior experiments, led by Rob Mills of Lisboa, who had worked out and coded up a number of interesting test cases.

In the first group of experiments, two CASUs were heating opposite corners of the arena, one providing the bees’ preferred temperature of 36 degrees, and the other heating to a temperature two degrees lower. After the bees had aggregated at the optimal spot, the heating CASUs were turned off, and two new attractive spots were created in the other two corners of the arena. Again, one spot was only locally optimal, the other globally optimal. The experiment was performed in two variants: one with an abrupt optimum change, as described above, and another where the optima moved along a chain of neighboring CASUs. The goal was to see whether the CASU array can be used to “guide” the bees to the global optimum. Further experiments are needed before drawing definite conclusions; however, in the performed experiments, the bees’ time to reach the global optimum decreased with the help of the CASUs.

Now THAT is what I call an aggregation!

The other set of experiments was the first step towards the ambitious ASSISIbf goal of interaction between spatially separated societies. Two groups of bees, physically separated in two smaller arenas within the CASU array, were required to coordinate their decision on an aggregation spot. The CASUs were “counting” the bees in their surroundings by means of IR proximity sensors and closing a positive feedback loop by producing more heat when counting more bees. This was the first set of fully autonomous CASU experiments!
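The positive feedback loop above can be illustrated with a toy model (this is not the project code; the gain and base temperature are made up): each of two sites heats in proportion to the bees it detects, bees drift toward the warmer site, and a small initial imbalance amplifies until the group converges on one spot.

```python
# Toy illustration of the CASU positive feedback loop (illustrative
# constants, deterministic bee movement for clarity).

K_HEAT = 0.5   # extra degC per detected bee
BASE = 28.0    # arena base temperature in degC

def step(counts):
    """One round: move a bee from the cooler site to the warmer one."""
    temps = [BASE + K_HEAT * c for c in counts]
    cooler = temps.index(min(temps))
    warmer = 1 - cooler
    if temps[0] != temps[1] and counts[cooler] > 0:
        counts = list(counts)
        counts[cooler] -= 1
        counts[warmer] += 1
    return counts

counts = [6, 4]          # slight initial imbalance between the two arenas
for _ in range(10):
    counts = step(counts)
# The feedback drives all bees to the initially more crowded site.
```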

So close no matter how far!

Additional experiments featured a mixed society, but the details of this experiment will be kept secret for the time being.

This experiment is still Top secret!

Some experimental results are not ready to be publicized yet!

One of the highlights of the workshop was the bee detection and tracking software implemented and tested by Marcelo of EPFL. Due to the bees’ high density and unpredictable motion, bee tracking is a notoriously difficult problem, and to the best of our knowledge no robust solutions, whether commercial or academic, are currently available. Well, at least until the end of this week, when Marcelo adapted his fish-tracking tool to tracking bees. It took a lot of coding and some adjustments to the environment

Reliable tracking requires perfect environmental conditions.

but the results are more than impressive:

All in all, the whole ASSISIbf team is satisfied with the progress achieved and confident that they can keep up the dynamic pace set out in the project DoW.

The ASSISIbf team gathers in Graz for a training session on the CASU array functionality. The Zagreb team has produced a 3×3 CASU array, the first prototype of the VRRI (Virtual Reality to Reality Interface), the device designed to help us communicate with groups of honeybees in novel ways. The Graz and Zagreb teams have been debugging and finalizing the prototype for the past three weeks, just as the honeybees are starting to wake up from their winter hibernation. Researchers from Lisboa and Lausanne have joined them today for a week of intensive experimentation.

The brainstorming session that kicked off the training.

The plan for the week is to perform two types of experiments:

  1. Several individual experiments to showcase the stimuli-generating and sensing capabilities of the CASU array
  2. Collective behavior experiments, closing the loop between the bees and CASUs

Furthermore, EPFL’s Marcelo Elias de Oliveria will modify and test the real-time image tracking system in the Bee-arena. His tracking software is already being successfully used in zebrafish experiments in Paris, and Marcelo will use this week to make the modifications necessary for tracking the bees.

Marcelo is working on the Bee-tracking software, while Rob and Damjan are setting up connections to the CASU Control boards.

Martina is preparing the thermal camera for temperature experiments.

After overcoming some technical difficulties on day 1, we were able to conduct the first successful experiments on the second day of the workshop. We analyzed the heat propagation properties of the arena by heating and cooling one CASU and observing the arena with a thermal camera.

Martina and Rob analyze heat propagation in the CASU array.

In the second experimental set of the day, bees and CASUs made first contact. We released a group of 40 bees into the arena and let them wander around. The CASUs were programmed to signal bee detections (made by the IR sensors) with the diagnostic LED.

Bee detections trigger CASU LED signals.


Organizers: José Halloy, Université Paris Diderot; Thomas Schmickl, Karl-Franzens-University Graz; Stuart Wilson, University of Sheffield.

Our workshop had an audience of approximately 20 people, similar to the other satellite workshops at Living Machines 2014, held on 29th August 2014 in Milano.

by Thomas Schmickl:

Tim Landgraf:

Tim announces that he will play devil’s advocate, coining arguments against robot-animal societies as scientific tools. He shows the waggle dance and Karl von Frisch’s experiments to decipher it. He then explains the details of RoboBee: a plotter machine was used as the basis for building the bee robot, to which his team added extensions such as heating the robot, flapping wings, and offering sugar-water probes to bees. Some nice movies of RoboBee successes and non-successes were shown. The team used harmonic radar to track the bees’ flights. Tim explains an experiment with two real rewarding food sources and one virtual food source that never rewarded a bee but was advertised by RoboBee as a good food source. The bees followed the dances but rarely responded to the advertised source; they often flew instead to one of the non-advertised food sources where they had been rewarded before (by flying there and drinking sugar water). Tim brings a huge list of unknown aspects of these experiments: many unknown variables on the robot side, and many unknowns (motivation, etc.) on the bee side. Due to their vast number, he considers them unlikely to be explorable exhaustively, as assessing each will require specific experimentation. In his upcoming project, his team plans to observe all dancing interactions in a full colony (who danced with whom, where, and for how long?), using several high-performance cameras and computer clusters. This way he plans to generate a Facebook of a beehive, or maybe a honeybee version of the NSA’s PRISM program? :-)

Andy Adamatzky:

Andy presents Physarum machines, which are basically growing slime moulds but, in his words, are a biological implementation of Kolmogorov machines (1950s), which were proposed as an alternative to Turing machines. Slime mould grows over a set of oat-flake depots and forms structures close to minimum spanning trees. He then describes several setups/manipulations that have to be applied to Physarum machines to make them grow into specific types of structures. How to program a Physarum machine? There is no single answer: the team uses repellents for spreading and merging; in addition, gradients can be used, and aromatic substances can guide/control the slime mould’s growth. He showed several experimental comparisons of countries with turnpike networks and the corresponding network that arises in a slime mould when oat flakes are placed in the same relative positions as the larger cities of the chosen country. 3D-printed models depicting mountains were also used, and similarity between the real-world traffic network and the slime-mould network can be found. Andy then showed how slime mould can be used as a bio-hybrid sensor hair, exploiting the fact that slime mould changes its oscillation frequency after being touched. A slime-mould-based colour sensor and a slime mould as a chemical sensor were also presented. Slime mould can also act as a connection, like a cable. It was finally left open whether it will be possible to rebuild even galaxies or the whole universe (including the Big Bang?) with slime mould :-). A more serious outlook was to introduce the concept of emotions to Physarum machines.


Jose Halloy:

Jose starts with a round trip through classical ethological experimentation (e.g. Tinbergen’s use of dummies to trigger behavioral responses). (Of course, he is again omitting Konrad Lorenz on that issue. :-( ) Jose presents a wrap-up of the results obtained from his experiments with a mixed society of mobile robots and cockroaches (Science 2007) and an extension of the model he published in PNAS 2006. Jose expands on the importance of having a behavior-based model in order to introduce robots into an animal society; this is important for the goal that the animals should accept the robots as part of their own society. He describes the difficulty, or impossibility, of going from a microscopic model to a macroscopic one or the other way around. In a new approach, he fits/tunes the parameters of a finite-state machine or a Markov model by using an evolutionary algorithm to alter the model’s parameters until it reproduces the macroscopic system behavior. Finally, a system was discussed that derives the finite-state machine itself from empirically observed data to close the loop. This is an important step, as deriving models and parameter values from observed data in an automated way may eventually lead to an autonomous and automatic research machinery. Science without scientists? Maybe.

Adam Miklosi:

Adam talks about dogs, approaching the topic first from the historical side: the when, where and why of the domestication of dogs by humans. He then reports on the SWARMIX project, which combines flying drones, dogs and humans for search & rescue operations. He raises several important questions, such as “why should a dog follow a flying robot in a cooperative way?”. A series of experiments was shown that used simple radio-controlled cars (“mechanic robots”), radio-controlled cars that also showed weird, non-perfect behaviors (“social robots”), and humans that behaved robot-like (“human robots”). It was first found that there are significant differences in the reaction behaviors (such as attention) that the dogs show towards these agents. Finally, it was found that real humans can influence dogs’ food-choice behavior, while “mechanic robots” cannot and “social robots” can at least a bit.

Stuart Wilson:

Stuart talks about genes, neurons & synapses first, stressing the fact that self-organization allows 10^5 genes to code for 10^11 neurons and 10^14 synapses, and the consequences this might have for the spatial self-organization of brain systems. He starts with a tutorial on self-organizing cortical maps, then gives examples of self-organizing sensorimotor maps, and finally self-organizing behavior. He shows the Kohonen SOM (self-organizing map) approach and compares the finally emerging pattern (a folded network) to images of the visual cortex, which shows similarities. He then continues with the LISSOM approach, which extends the SOM concept with long-range inhibitory connections and short-range excitatory connections. Using this method, which is close to the famous Turing model of morphogenesis (see also Murray for skin/fur patterns, or D’Arcy Thompson’s “On Growth and Form”), it is possible to grow cortex models that behave and look very similar to the visual cortex of vertebrates. In mice & rats this is not found (an exception to the rule?), but similar patterning is found in the way the placement of the whisker hairs on the rat’s face and their direction preference are represented in the rat’s brain. Here too, LISSOM can be used to train/generate comparable maps. He shows an implementation of such LISSOM maps in a moving rodent-like robot that senses its environment through its whiskers. Finally, Stuart shows how rat pups live in the first two weeks after being born, which is the timespan in which those brain nets develop: they cuddle together in the nest with gazillions of touch contacts. He ends with the thermoregulating aspect of the cuddling and a model that also incorporates the behavior of the pups, which in fact arises from their neural maps of the environment and their inclusion in the sensorimotor loop. So he suggests a major neuro-behavioral-physical-developmental feedback loop regulating all these aspects.
Rat cuddling groups seem a bit similar to the honeybees’ winter cluster, as my (T.S.) final thought.
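For readers unfamiliar with the Kohonen SOM mentioned above, the core learning rule fits in a few lines. This is a toy 1-D sketch (not Stuart's code; all parameters are illustrative): the best-matching unit and its neighbours move toward each input, so nearby units end up representing nearby inputs.

```python
# Minimal 1-D Kohonen SOM: a chain of units adapts to scalar inputs.
import math
import random

def train_som(weights, inputs, lr=0.3, sigma=1.0, epochs=50):
    """Classic SOM learning rule on a 1-D chain of units."""
    rng = random.Random(0)  # fixed seed for repeatability
    for _ in range(epochs):
        for x in rng.sample(inputs, len(inputs)):
            # Best-matching unit: the weight closest to the input.
            bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
            # Pull the BMU and its neighbours toward the input,
            # with a Gaussian neighbourhood falloff.
            for i in range(len(weights)):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] += lr * h * (x - weights[i])
    return weights

w = train_som([0.5, 0.4, 0.6, 0.45], [0.0, 0.33, 0.66, 1.0])
```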

Thomas Schmickl:

Finally, I gave a presentation on the aggregation behavior of young honeybees, which manage to find optimal locations in a temperature arena in a decentralized and swarm-intelligent way. I introduced the first outcomes of our project ASSISIbf, including novel findings on bees’ reactions to light emissions and vibration. In several movies, I showed the first-generation prototypes of ASSISIbf honeybee robots (CASUs) in action and in interaction with honeybees. Several videos demonstrated that we can control honeybee aggregations by the temperature emissions, and also by the vibrations, produced by CASUs. In a final demonstration, experiments were shown in which we were able to close the loop for the first time: robots reacted to the local presence of bees, and bees in turn reacted to those robots locally. In addition, I showed the derivation of classification algorithms that can be used to predict the local honeybee density around a CASU, as well as first steps in using evolutionary computation to generate CASU software controllers in order to modulate the collective behavior of the associated honeybee society.

The ASSISIbf project aims to create a new class of bio-hybrid systems by integrating artificial agents capable of autonomous computation, called CASUs (Combined Actuator-Sensor Units), with animal societies (honeybees and fish). The CASUs are expected to become powerful tools that will provide new experimental insights into collective phenomena.

Programmability and autonomous computation are key capabilities for making the CASUs useful experimental tools. Researchers with a background in biology or collective systems theory should be able to quickly implement and deploy algorithms for interacting with the animals. Furthermore, because experimenting with animals is time-consuming and can have seasonal constraints (e.g. experimenting with honeybees is only possible between May and September), simulation tools are necessary to facilitate algorithm development. Transitioning from simulation to actual hardware should be seamless, without requiring changes to user code.

Figure 1: Overview of software components.

To meet the above requirements, we have developed the software infrastructure shown in Figure 1. The key software component is the middleware, built on top of the ZeroMQ networking library, with Google’s Protocol Buffers used for message serialization. The middleware provides a flexible and efficient message-based communication system which decouples user code from low-level device details. This decoupling is the key ingredient for enabling a seamless transition between simulation and hardware. On the user side, the Python programming language has been chosen, as it is well suited to a fast development cycle and has an extensive ecosystem of available libraries.
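The decoupling idea can be illustrated with a small pattern sketch. This is not the actual middleware (which uses ZeroMQ sockets and Protobuf messages); the class and message names here are invented to show the principle: user code talks only to a generic message interface, so a simulated backend and a hardware-backed one are interchangeable.

```python
# Pattern sketch of middleware decoupling (illustrative names, not the
# ASSISIbf code): the controller never touches device details.

class SimulatedCasu:
    """Stand-in backend: answers sensor queries and records commands."""
    def __init__(self):
        self.temp_ref = None
    def request(self, msg):
        if msg["cmd"] == "get_ir":
            return {"ir": 7}          # canned sensor reading
        if msg["cmd"] == "set_temp":
            self.temp_ref = msg["value"]
            return {"ok": True}

def controller(link):
    """User-side code: written once, backend-agnostic."""
    ir = link.request({"cmd": "get_ir"})["ir"]
    target = 36.0 if ir > 0 else 28.0   # heat up when bees are detected
    link.request({"cmd": "set_temp", "value": target})
    return target

casu = SimulatedCasu()
result = controller(casu)   # swapping in a hardware-backed link needs no edits
```

In the real system the `request` hop is a ZeroMQ message exchange with Protobuf-serialized payloads, but the controller-side code is insulated from that in exactly this way.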

Initial experiments have confirmed the flexibility of the software design. We were able to test CASU control algorithms in simulation and afterwards deploy the same code to the CASU hardware without any modification. The observed experimental behavior qualitatively matched the simulation results.



Introducing the Fish Robots

Categories: Blog

One of the goals of the ASSISIbf project is to create lures that can interact with zebrafish in an aquarium. These lures are now ready and are being tested in 1 m × 1 m aquariums with 10 cm of water. The lure consists of a passive device placed in the water and moved by a robot placed under the aquarium. Compared to other similar devices, our robot is very compact: it is 43 mm long, 22 mm wide and 67 mm high, with a mass of 80 grams. Moreover, the robot does not need batteries; it receives a continuous power supply through brushes that pick up power from conductive plates above and under the robot. This allows long-duration experiments with multiple robots. Preliminary experiments with zebrafish will be presented at the ROBIO 2014 conference in Bali, Indonesia.


Fish lure with passive device within and moving robot under the aquarium.

The interests of groups and the individuals in those groups are frequently in tension, whether in collective decision making or in social evolution (e.g. the evolution of cooperation).  Understanding how nature resolves such conflicts can be challenging and raises involved questions.  For instance: what population structures are sufficient to promote cooperative behaviour amongst what are inherently selfish individuals [1, 2]? How might those population structures themselves evolve as a consequence (and cause) of the cooperative traits [3]?  And what circumstances provide benefits for group living [4]?

Sasaki et al [5] investigate one set of behaviours that address this last question: they study how social ants make decisions about the quality of nest sites, examining the conditions under which a group is able to make better decisions than individuals.

Temnothorax ants, individually paint marked inside a laboratory nest (

In their experiments, they present the ants with two potential nests that vary in quality — in light levels, in this case (darker nests are preferred). One option is kept constant, while the alternative nest becomes progressively worse, which makes deciding between them easier and easier.  They challenge either a single ant or a colony (of 20–250 ants) to decide between the nests. In keeping with prior work on collective decision making, they find that the collectives are able to make better decisions than individual ants faced with the same task — but only under some conditions.  Specifically, when the nests are similar in quality and the decision is difficult, groups outperform individuals.  For an easy decision individual ants do better.

The authors suggest that this surprising result comes down to the speed at which a colony can make a decision in comparison to a sole ant.  Individuals of a colony need not visit more than one site (because of the competitive recruitment process), and so a colony can effectively make decisions in parallel.  Conversely, a sole individual needs to make many visits to each site before being able to compare them accurately, and they (appear to) make a decision before this process reliably breaks the symmetry for sites of similar quality.

On the other hand, individuals are able to rapidly assess easy decisions, and need not pay any cost for aggregating social information, in contrast to the colony.  In short: a decision that does not need a population will be made more efficiently by an individual.

A previous study by the same team showed that difficulty could be presented to collectives of ants in an alternative manner [6]: by presenting the organisms with a small (2) or large (8) number of possible nests.  Here, both individual ants and colonies were able to make good decisions in the small case but only collectives were able to maintain good quality decision making when presented with more options.

In ASSISIbf we are interested in learning how we can modify the overall decisions made by animal groups via the interaction of robots and those groups.  If we are to understand how mixed societies may be most productive, it is important that we understand when group-level decisions are likely to be successful. Sasaki’s studies [5,6] show us that collectives might not always be able to make the best decisions (see also Kao & Couzin [7]).  Interestingly, in our mixed societies, the robots have the potential to overcome poor group-level decisions by shaping the information available to the animals.  Pinpointing this kind of “weakness” may yield a substantial impact on overall outcomes using relatively little resource.


Find out more on Takao Sasaki’s research, Stephen Pratt’s lab, and see the ants in action.

[1] Nowak, M. A., & May, R. M. (1992). Evolutionary games and spatial chaos. Nature, 359(6398), 826-829.

[2] Santos, F. C., Pacheco, J. M., & Lenaerts, T. (2006). Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proceedings of the National Academy of Sciences of the United States of America, 103(9), 3490-3494.

[3] Powers, S. T., Penn, A. S., & Watson, R. A. (2011). The concurrent evolution of cooperation and the population structures that support it. Evolution, 65(6), 1527-1543.

[4] Calcott, B. (2008). The other cooperation problem: generating benefit.  Biology & Philosophy, 23(2), 179-203.

[5] Sasaki, T., Granovskiy, B., Mann, R. P., Sumpter, D. J., & Pratt, S. C. (2013). Ant colonies outperform individuals when a sensory discrimination task is difficult but not when it is easy. Proceedings of the National Academy of Sciences, 110(34), 13769-13773.

[6] Sasaki, T., & Pratt, S. C. (2012). Groups have a larger cognitive capacity than individuals. Current Biology, 22(19), R827-R829.

[7] Kao, A. B., & Couzin, I. D. (2014). Decision accuracy in complex environments is often maximized by small group sizes. Proceedings of the Royal Society B: Biological Sciences, 281(1784), 20133305.

The teams spent the last two days of the training working hard to complete experiments and prepare materials for the upcoming Review Meeting.
The guys from Paris7, supported by Frank Bonnet of EPFL, learned how to program a fish robot using the ASEBA IDE. They implemented the developed algorithms for controlling a fish-CASU, then joined the fish-CASU and real fish in a water tank and ran some nice experiments.

Water tank with fish-CASU and real fish

Thomas Schmickl and Ronald Thenius invested their time in exploring the capabilities of the software and hardware developed by UNIZG. Eventually, several scenarios were conducted, both in simulation and on real hardware. In the real experimental setup, small robots called Bristlebots (HEXBUG Nano) were used instead of real bees.

"Bee" arena for experiments with bee-CASUs

“Bee” arena for experiments with bee-CASUs

Payam Zahadat did her best to implement some cool algorithms on the Thymio-II robots. Her robots were improving from day to day, both visually and functionally. Marcelo Elias de Oliveira from EPFL supported her by implementing algorithms for visual tracking of the robots, thus providing pose information for each Thymio robot. Again, several experiments were performed using the developed setup.

Final arena for experimenting with Thymio-II robots

Luis Correia and Fernando Silva collaborated with Damjan Miklić to implement heat stimuli in the existing simulator. They implemented a model for heat dissipation and superposition in the 2D plane and ran many simulations to verify it.
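A model of that kind can be sketched as follows. The exponential decay law and all constants here are illustrative assumptions, not the model actually implemented in the simulator; the point is the superposition of per-heater contributions:

```python
# Illustrative 2D heat model: each heater's contribution decays with
# distance from the source, and contributions from several heaters
# superpose additively on top of the ambient temperature.
import math

AMBIENT = 26.0   # degC, assumed ambient arena temperature
DECAY = 0.1      # per-cm decay rate, illustrative

def temperature(point, sources):
    """Temperature at `point` from superposed exponentially decaying sources.

    `sources` is a list of ((x, y), delta_t) heater entries, where
    delta_t is the temperature rise right at the heater.
    """
    x, y = point
    rise = 0.0
    for (sx, sy), dt in sources:
        d = math.hypot(x - sx, y - sy)
        rise += dt * math.exp(-DECAY * d)
    return AMBIENT + rise

heaters = [((0.0, 0.0), 10.0), ((10.0, 0.0), 10.0)]
t_mid = temperature((5.0, 0.0), heaters)   # midpoint feels both heaters
t_at = temperature((0.0, 0.0), heaters)    # on top of one heater
```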

All the experiments were recorded and will be prepared for the review meeting, to be held in Paris next month. We can surely expect some very nice videos.