Andrew Straw discusses the development and importance of a new VR technology that enables animal behavior to be studied without animals being restrained.
Andrew Straw is a professor in the faculty of biology at the University of Freiburg (Germany). His lab performs basic research on the visual behavior and neural circuits of genetic model organisms, especially the fly Drosophila. As a complement to the scientific research, his lab also develops novel technologies at the intersection of biology, computing and imaging. For example, in addition to virtual reality (VR) technology, they develop multi-camera, markerless 3D animal tracking systems and tools for manipulating neural circuit activity in freely behaving animals using optogenetics.
Born in Albuquerque (NM, USA), Straw completed his BSc at the University of Southern California (CA, USA), and completed his PhD at the University of Adelaide (SA, Australia). Prior to his present position, he was a researcher at Caltech (CA, USA) and the Research Institute of Molecular Pathology (Vienna, Austria). He was awarded an ERC Starting Grant to study fly visual circuits and enjoys hiking.
Could you please give a general overview of what the research entailed?
It starts from the challenge we overcame. A lot of researchers have used VR with animals that have been restrained. The animals have typically been held under a microscope in a way that they couldn’t move at all, or maybe they could only move their legs while on a treadmill. Then the researcher would place them inside a computer program, which, based on the movement of their appendages, would simulate their movement through the world.
However, animals are really sensitive to their own movement, especially if you want to study things like spatial awareness or how an animal knows where it is. So, it can be really important that an animal not only has the visual appearance of being in a certain location but also has all of that sensory feedback. When it has walked forward or when it has turned, every sense in its body tells it that it has walked forward and it has turned.
We made VR for freely moving animals. Technically, there are two parts to it. We track the animal with low latency and then we use computer graphics. From that animal’s position, we draw scenes on the walls of the arena it is within that we want it to see. It’s a lot like the holodeck in Star Trek actually; that’s a really good example.
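The two-part loop described above, track the animal and then redraw the walls from its position, can be sketched in toy form. This is illustrative code only; the class and function names are hypothetical stand-ins, not the actual FreemoVR API.

```python
# Toy sketch of closed-loop VR for a freely moving animal.
# All names here are hypothetical, not the real FreemoVR interfaces.

class Tracker:
    """Stand-in for a low-latency, multi-camera, markerless 3D tracker."""
    def __init__(self, path):
        self._path = iter(path)

    def estimate_position(self):
        return next(self._path)  # in reality: a fresh estimate within milliseconds

class World:
    """Stand-in virtual world: what is drawn depends on where the animal is."""
    def view_from(self, position):
        x, y, z = position
        return f"scene rendered from ({x:.2f}, {y:.2f}, {z:.2f})"

class Renderer:
    """Stand-in for the displays/projectors on the arena walls."""
    def __init__(self):
        self.frames = []

    def draw_on_arena_walls(self, view):
        self.frames.append(view)

def closed_loop_vr(tracker, world, renderer, n_frames):
    # Each frame: estimate the animal's position, then redraw the arena
    # walls as seen from that position -- the "holodeck" loop.
    for _ in range(n_frames):
        position = tracker.estimate_position()
        renderer.draw_on_arena_walls(world.view_from(position))

path = [(0.0, 0.0, 0.1), (0.1, 0.0, 0.1), (0.2, 0.1, 0.1)]
renderer = Renderer()
closed_loop_vr(Tracker(path), World(), renderer, n_frames=3)
```

The key property is that rendering is driven by the animal's own measured position each frame, so the animal, not the experimenter, closes the loop.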
How did you go about building the FreemoVR system?
Well, 10 years ago I was a postdoc in the lab of Michael Dickinson at Caltech (CA, USA). I spent years developing a multi-camera, markerless fly-tracking system that works in real time, so we know exactly where the fly is within milliseconds. That was step one.
Then, when I moved to my own lab in Vienna, I took these tracking data and started building a VR arena for animals using computer-graphics game engines. I was specifically focused on flies, because that’s what my lab mainly studies, but I realized it could be general for a lot of the other small animals that are really important in basic research, so I worked with colleagues to get it working in zebrafish and mice too.
What are the key implications of the research?
Right now it’s a method that enables a lot of new kinds of experiments that haven’t been possible before, in particular the ability to study questions about navigation. When an animal moves, all of its senses are very important for it to know where it is, to remember how to get home again, or to find food.
The other big one is the collective behavior of animals. Interaction between animals is the critical variable. If all you can do is observe multiple animals at once, you rely on natural experiments happening, and these may not happen at the frequency or efficiency you would like. With VR, you can put a real animal in the arena with simulated other animals, measure that one animal’s response, and then change the interactivity between the real and virtual animals to see how the real animal responds to different types of interaction.
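Treating "interactivity" as an experimental knob can be sketched in toy form. The update rule and names below are hypothetical, just to show the kind of variable such an experiment manipulates, not the published method.

```python
def virtual_animal_position(real_pos, prev_virtual_pos, interactivity):
    """Step a simulated conspecific toward the real animal's position.

    `interactivity` in [0, 1] is the experimental knob: 0 means the
    virtual animal ignores the real one entirely, 1 means it matches
    the real animal's position exactly. This update rule is a toy.
    """
    rx, ry = real_pos
    vx, vy = prev_virtual_pos
    return (vx + interactivity * (rx - vx),
            vy + interactivity * (ry - vy))

# Sweep the knob: the experimenter controls how strongly the virtual
# animal tracks the real one, then measures the real animal's response.
for k in (0.0, 0.5, 1.0):
    print(k, virtual_animal_position((1.0, 0.0), (0.0, 0.0), k))
```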
It’s a great tool for studying spatial navigation and social and collective behavior in animals.
So what are you currently working on? Are you going to develop this further?
Several different things – one thing we are working on is imaging the activity of the brain of an animal when it’s freely moving, specifically flies and fish. When they are flying or swimming (in this case we will just start with walking flies), we want to be able to record the activity of neurons in the brain even while it’s in this VR world. So there is more technology development on that side, but now there is also the science.
For example, we are studying the neural circuits behind how a fly knows where it is. There is a phenomenon called path integration that, in insects, is best known in desert ants. These ants leave their nest and walk for tens of meters to find food. Their walking path out from the nest is a wandering search path, but unlike most European ants they cannot leave a chemical trail: in the Sahara desert it is so hot that the chemicals immediately evaporate.
They somehow remember their way home, and we know that because the moment they get the food they go straight home again. This has been demonstrated very clearly in a desert ant called Cataglyphis. It has recently been demonstrated that Drosophila, the fruit fly, also uses a form of path integration. We are interested in using VR to test whether flies also use vision to find home again, for example by remembering landmarks, as a lot of ants and bees do.
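The core of path integration can be shown with a short worked example: summing the outbound displacement vectors and negating the sum yields a vector pointing straight home. This is a minimal geometric sketch of the idea, not a claim about the actual neural mechanism in ants or flies.

```python
import math

def home_vector(steps):
    """Integrate an outbound path into a single home vector.

    `steps` is a list of (heading_radians, distance) pairs. Summing the
    displacements and negating gives the straight-line vector back to
    the nest -- a toy model of path integration.
    """
    x = sum(d * math.cos(h) for h, d in steps)
    y = sum(d * math.sin(h) for h, d in steps)
    return (-x, -y)

# A wandering outbound search: 3 m east, then 4 m north (7 m walked)...
outbound = [(0.0, 3.0), (math.pi / 2, 4.0)]
hx, hy = home_vector(outbound)
# ...yet the home vector is the 5 m straight line, not the 7 m path.
print(round(math.hypot(hx, hy), 6))  # → 5.0
```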
Do you think that the future of animal behavior research could be in this VR system?
Yes, I think so; it’s a really exciting technique for a lot of people, and I get a lot of interest in it. One of the challenges is to make it easy enough to use and quick enough to set up that anybody who is interested doesn’t have to spend 10 years developing it like I did.
I’ve been really active in the open source software community but it’s difficult to get support for developing these kinds of things in an open way. It’s a real challenge to make this stuff accessible to any lab that wants to use it.
Ultimately, I think these techniques will be successful. Not necessarily with our specific software but I think people see that this is a really powerful technique and they would like to use it to answer a lot of questions.
There are many animal researchers who have come to associate the term VR with restrained animal VR. I hear a lot of criticism; people say VR experiments are terrible but what they are actually saying is that restrained animal VR experiments are terrible. I think the ability to have the animal freely moving, in the context of this multi-sensory feedback, is really important. That’s the environment and situation the animals have developed in.
It may not be obvious, but an animal exists in a closed loop, which means that what it does influences what it experiences, which then influences what it does. This cyclical nature, this action–perception cycle, is fundamental to every animal. But when you restrain an animal, you break that loop. Animal brains have developed under these conditions of being in this closed loop, so if we really want to understand brain function, then we need to understand it in the context of freely moving animals.
Do you think we could get to a point where we would be able to build a VR model of an animal that replicates the real animal exactly?
My question with modeling is ‘what is the goal of the model?’ Modeling is very useful for a lot of targeted questions. For example, my colleague Emily Baird at Lund University (Sweden) wants to understand how different species of bees have very diverse eye designs and why they behave differently. There is a whole interaction between the habitat they live in, the morphology of their eyes and their behavioral repertoire. I think being able to do VR experiments on these bees is a natural experimental design that will open up a lot of avenues.