Bioinspired eye melds nature and machine to transform robotics


Original story from The University of North Carolina at Chapel Hill (NC, USA).

A bioinspired robotic eye can automatically adjust its pupil size in response to changing light levels, enabling it to recognize objects even in unevenly lit or overexposed environments.

Scientists have long admired how animal eyes adapt so smoothly to their surroundings. A human pupil shrinks in bright sunlight and opens wide in the dark. A cat’s vertical slit pupil helps it hunt at night. Sheep have wide, horizontal pupils that let them scan the horizon for danger. These natural designs are the result of millions of years of evolution, and they help animals survive in very different environments.

In a paper published in Science Robotics, researchers in the Department of Applied Physical Sciences at The University of North Carolina at Chapel Hill (NC, USA) describe an artificial vision system that borrows these ideas from nature and brings them into machines. The study, ‘Bio-inspired Adaptive Pupil Reflex based on Liquid-Metal Shapeshifters for Machine Vision’, shows how a fabricated ‘eye’ can automatically change its pupil shape and size in response to light, much like a living eye does. The work combines ideas from biology, engineering and computing, but the core goal is simple: help machines see better when lighting conditions change suddenly or dramatically.

“Biological eyes don’t just take pictures; they actively adjust themselves to protect vision and improve clarity,” commented Kun Liang, lead author of the study and a postdoctoral fellow in applied physical sciences. “Our goal was to build an artificial vision system that doesn’t rely only on software to fix images afterward, but instead adapts physically in real time, the way an eye does.”

At the heart of the system is an artificial pupil made from liquid metal. Unlike solid materials, this liquid metal can change its shape when electrical signals are applied. In bright light, the liquid metal spreads out to block part of the opening, reducing the amount of light that enters the system. In dim light, it pulls back, letting in more light. This behavior mimics the pupil reflex in human and animal eyes.

Behind this pupil sits an artificial ‘retina’. Instead of a flat sensor like the one in most cameras, the researchers built a curved, dome-shaped array of light sensors. This shape is closer to that of a real eye and allows a much wider field of view, at about 108 degrees. In practical terms, that means the system can see more of its surroundings at once, without needing multiple cameras or complex optics.

When light hits the artificial retina, it generates electrical signals that reflect the brightness of the environment. These signals are then sent to liquid-metal components that act like simple artificial neurons. Just as nerve cells in the body send electrical spikes, these components produce pulse-like signals that control how the liquid-metal pupil moves. The result is a closed-loop system: light comes in, the system reacts and the amount of incoming light is adjusted automatically.
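The feedback loop described above can be sketched in a few lines of code. This is an illustrative model, not the authors' implementation: the target level, gain and aperture limits are assumed values chosen to show how light coming in, the system reacting and the incoming light being adjusted can settle into equilibrium.

```python
# Minimal sketch of a closed-loop pupil reflex (illustrative only).
# Light passes through the aperture, the 'retina' senses it, and the
# aperture is nudged toward a target brightness, mimicking
# light -> sensing -> actuation -> adjusted light.

def pupil_reflex_step(aperture, ambient_light, target=0.5, gain=0.2):
    """One feedback iteration.

    aperture: fraction of the opening that is clear (0..1)
    ambient_light: scene brightness (0..1)
    Returns the updated aperture.
    """
    sensed = ambient_light * aperture        # light reaching the 'retina'
    error = target - sensed                  # positive -> open wider
    aperture += gain * error                 # 'liquid metal' spreads or retracts
    return min(1.0, max(0.05, aperture))     # physical limits of the iris

def settle(ambient_light, steps=200):
    """Iterate the reflex until the pupil settles for a given brightness."""
    aperture = 0.5
    for _ in range(steps):
        aperture = pupil_reflex_step(aperture, ambient_light)
    return aperture
```

Running `settle` with a bright scene yields a smaller equilibrium aperture than with a dim one, which is the qualitative behavior the article describes.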

“This kind of feedback loop is essential in biology, and it’s something most machine vision systems don’t truly have,” continued Liang. “By integrating sensing, decision-making and actuation into one system, we’re closer to how real eyes work.”

One striking feature of the system is its ability to copy not just human pupils, but the pupils of other animals. By controlling different sections of the liquid metal independently, the researchers can form round pupils, vertical slits like a cat’s, horizontal shapes like a sheep’s or even more unusual forms seen in animals such as cuttlefish. Each shape changes how light enters and how images are formed, offering different visual advantages.
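The idea of different pupil geometries admitting light differently can be illustrated with simple aperture masks. This is an assumed toy geometry, not the paper's device model; the grid size and opening width are arbitrary parameters chosen for the sketch.

```python
# Illustrative aperture masks for the pupil shapes mentioned in the
# article (assumed geometry, not the actual liquid-metal device).
import numpy as np

def pupil_mask(shape, size=64, width=0.25):
    """Return a boolean grid where True marks the clear opening.

    shape: 'round' (human-like), 'vertical' (cat-like slit),
           'horizontal' (sheep-like bar)
    """
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    if shape == "round":
        return x**2 + y**2 <= width**2
    if shape == "vertical":
        return np.abs(x) <= width
    if shape == "horizontal":
        return np.abs(y) <= width
    raise ValueError(f"unknown pupil shape: {shape}")

def light_admitted(mask):
    # Fraction of incoming light passed by the aperture.
    return float(mask.mean())
```

With equal widths, the vertical and horizontal slits admit the same total light but sample different directions, while a round pupil of the same half-width admits less, hinting at why each shape trades off light-gathering against what it emphasizes in the scene.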

This flexibility could be especially useful in robotics and autonomous machines. For example, a robot designed to navigate open landscapes might benefit from a wide, horizontal pupil that emphasizes the horizon. A machine built for precision tasks could use a narrow pupil shape that improves depth and focus.


To test how well the system works, the researchers challenged it with harsh lighting conditions that normally confuse cameras. In very bright light, images can become washed out, with important details lost. When the artificial pupil reflex was activated, the system reduced incoming light before the image was processed. This physical adjustment improved image clarity and made it easier for computer programs to recognize objects, such as numbers or vehicles.

In one experiment, the team showed that image recognition accuracy improved significantly after the adaptive pupil filtered out excess light. Instead of relying on heavy computing power to clean up overexposed images, the system handled part of the problem at the hardware level.
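Why dimming the light before capture helps can be shown with a toy saturation model. This sketch is hypothetical, with made-up pixel values: the point is that once a sensor clips, detail is gone and no software can recover it, whereas scaling the light down physically before capture preserves the contrast.

```python
# Toy model of sensor saturation (hypothetical values, for illustration).
import numpy as np

def sensor_reading(scene, aperture):
    # Pixels saturate at 1.0; the aperture scales light *before* capture.
    return np.clip(scene * aperture, 0.0, 1.0)

scene = np.array([2.0, 3.0, 4.0])    # an overexposed, very bright scene
wide = sensor_reading(scene, 1.0)    # fully open: every pixel clips to 1.0
narrow = sensor_reading(scene, 0.2)  # constricted pupil: contrast survives
```

With the aperture fully open, all three pixels read 1.0 and the differences between them are unrecoverable; with the aperture constricted, the readings remain distinct, which is the hardware-level advantage the experiment demonstrates.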

“This research shows the power of letting the hardware do some of the thinking,” noted Wubin Bai, senior author of the paper and an assistant professor in the Department of Applied Physical Sciences at The University of North Carolina at Chapel Hill. “By building adaptability directly into the vision system, we can reduce complexity, save energy and improve reliability in real-world environments.”

The implications extend beyond laboratory demonstrations. Modern technologies, such as autonomous vehicles, drones and industrial robots, often struggle with sudden changes in light, like driving into a tunnel or facing direct sunlight. An adaptive vision system inspired by biological eyes could help these machines respond more quickly and safely.

While the current prototype is not yet fast enough for high-speed driving applications, the researchers see clear paths for improvement. Smaller components, refined designs and faster responses could make future versions practical for everyday use. The work also highlights a shift in how engineers think about machine vision. Instead of treating cameras as passive tools that simply capture images, this research points toward systems that actively interact with their environment, just as living eyes do.

“Nature has already solved many of the problems we face in engineering,” added Bai. “By studying and reimagining these solutions, we can build machines that see the world in smarter, more resilient ways.”


This article has been republished from the following materials. Material may have been edited for length and house style. For further information, please contact the cited source.

