How robots can orient themselves in an energy-efficient way

For robots to be able to move autonomously in space, they must be able to estimate where they are and how they are moving. Until now, this was only possible with a great deal of computing power and energy. A research team including ZHAW researcher Yulia Sandamirskaya has now developed a new type of energy-efficient solution and demonstrated its applicability to a real robot task. The results have been published in the renowned journal Nature Machine Intelligence.

(Image: ZHAW - LSFM)

Even small animals such as bees, which have fewer than one million neurons, can easily find their way around complex environments. They use visual signals to estimate their own movement and track their position in relation to important locations. This ability is called visual odometry. In terms of compactness and energy efficiency, the solution that animals use is unmatched by the best current technical solutions for robots. However, for new applications such as small autonomous drones or lightweight augmented reality glasses to become possible, energy efficiency must be massively improved. 

A neuromorphic, event-based camera was mounted on the robot arm. Much like our eyes, it records changes in the scene as "events". These are encoded as activation vectors and sent to a neural resonator, which builds up a short-term memory of the visual scene, tracks the objects in it and computes the camera's motion. In this way, the system generates a representation that is transparent, i.e. comprehensible to a human and relevant to the task at hand. (Image: Nature Machine Intelligence)
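To give a flavor of what "encoding events as activation vectors" can mean, here is a minimal sketch in the spirit of vector-symbolic (phasor) representations. It is not the authors' implementation: the dimensionality `D`, the seed vectors and the function names are illustrative assumptions. The key property it demonstrates is that nearby event locations map to similar vectors, which is what lets a downstream resonator track structure in the scene.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # hypervector dimensionality (assumed value for illustration)

# Random unit-phasor "seed" vectors representing the x and y image axes.
X = np.exp(1j * rng.uniform(-np.pi, np.pi, D))
Y = np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def encode_event(x, y):
    """Encode an event location (x, y) as one activation vector via
    fractional power encoding: raise each seed phasor to the coordinate
    and bind the results by elementwise multiplication."""
    return (X ** x) * (Y ** y)

# A single camera "event" at pixel (3.0, 5.0) becomes one D-dim vector.
v = encode_event(3.0, 5.0)

# Similarity falls off with distance between events: a nearby event
# yields a similar vector, a distant one is nearly orthogonal.
sim_near = np.abs(np.vdot(v, encode_event(3.1, 5.0))) / D
sim_far = np.abs(np.vdot(v, encode_event(8.0, 1.0))) / D
print(sim_near > sim_far)
```

Because similarity degrades smoothly with displacement, small camera motions produce small, systematic changes in the encoded vectors, which is the kind of signal a resonator can exploit to estimate motion.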

Built according to the example of natural neural networks

In the work published in Nature Machine Intelligence, an international team of authors proposes a new neuromorphic solution, modeled on natural neural networks, that can also be implemented efficiently in neuromorphic hardware. The results are an important step towards using neuromorphic computer hardware for fast, energy-efficient visual odometry and the associated task of simultaneous localization and mapping. The researchers validated the approach experimentally in a simple robotic task and showed on an event-based dataset that its performance is state of the art.

More transparency in AI

The big difference from today's AI, such as the models behind ChatGPT, is that the described method can compose a visual scene from its various components, or take it apart into them. These components carry information such as which objects are present, where they are located, and more. In conventional neural networks, these components are mixed together and cannot be disentangled; the new method keeps them separable. This is crucial for developing modular and, above all, transparent AI systems.
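The idea of composing a scene from "what" and "where" components and later taking it apart can be sketched with vector binding and unbinding. This is a toy illustration, not the published method: the codebooks, object names and function names are invented for the example. It shows the disentangling property the text describes: one scene vector, from which either the location of an object or the identity at a location can be recovered.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1024  # hypervector dimensionality (assumed value)

def rand_phasor():
    """Random unit-phasor hypervector."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, D))

# Hypothetical codebooks for object identity ("what") and place ("where").
objects = {"cup": rand_phasor(), "book": rand_phasor()}
places = {"left": rand_phasor(), "right": rand_phasor()}

# Bind each object to its place (elementwise multiply), then superpose:
# the whole scene is one vector, yet its parts remain recoverable.
scene = objects["cup"] * places["left"] + objects["book"] * places["right"]

def unbind_and_match(scene, key, codebook):
    """Unbind with the conjugate of `key`, then return the codebook
    entry most similar to the result."""
    query = scene * np.conj(key)
    sims = {name: np.abs(np.vdot(vec, query)) / D
            for name, vec in codebook.items()}
    return max(sims, key=sims.get)

# "Where is the cup?"  -> unbind with the cup vector, match against places.
print(unbind_and_match(scene, objects["cup"], places))    # left
# "What is on the right?" -> unbind with the place, match against objects.
print(unbind_and_match(scene, places["right"], objects))  # book
```

In a conventional dense network the cup's identity and its position would be entangled in shared activations; here each question is answered by a single unbinding step, which is what makes the representation inspectable.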

Source: ZHAW - LSFM  

