An ordinary computer program works quite differently from a naturally grown human brain. A brain does not follow clear, logically coded instructions; it is a network of communicating cells. Problems that are difficult to break down into logical operations can be solved by replicating such networks on a computer.
A new methodology for building such neural networks, one that models the time evolution of the nerve signals in a fundamentally different way, has been developed at TU Wien (Vienna) in collaboration with scientists at the Massachusetts Institute of Technology (MIT). The inspiration behind this approach is a simple, well-studied roundworm, C. elegans.
The neural circuits from the nervous system of this tiny creature were replicated on the computer, and the model was then refined with machine-learning algorithms. This makes it possible to solve remarkably difficult tasks, such as parking a car, with an exceptionally small number of simulated cells. Although the worm-inspired network comprises just 12 neurons, it can be trained to steer a rover robot to a given location.
These new neural networks are extremely versatile. Another advantage is that their internal dynamics are much easier to interpret than those of conventional AI neural networks, which are often described as a “black box.”
Sending Signals in Branched Networks
Ramin Hasani from the Institute of Computer Engineering at TU Wien said, “Neural networks have to be trained. You provide a specific input and adjust the connections between the neurons so that the desired output is delivered.”
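Hasani's description of training, providing inputs and adjusting the connections between neurons until the desired output emerges, can be sketched in a few lines. The following is a minimal illustration (not the authors' code, and far simpler than their architecture): a single artificial neuron learning the logical AND function by nudging its connection weights after every error.

```python
import random

random.seed(0)  # for reproducibility

def train(samples, lr=0.1, epochs=200):
    """Train a single neuron y = step(w.x + b) on (input, target) pairs."""
    w = [random.uniform(-1, 1) for _ in samples[0][0]]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
            err = target - y
            # Adjust each connection in proportion to its input and the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Example: learn the logical AND function from four input/output pairs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

Real networks have many layers and use gradient-based training, but the principle is the same: the connections are tuned until input and desired output match.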
Radu Grosu from the Institute of Computer Engineering of TU Wien added, “The input, for example, can be a photograph, and the output can be the name of the person in the picture. Time usually does not play an important role in this process.” For most neural networks, all the data is delivered at once, producing a fixed output right away. Things in nature, however, work differently.
For instance, sequences of movements that react to a changing environment, speech recognition, and translation are always time-dependent. Hasani further added, “This is an architecture that can capture sequences because it makes neurons remember what happened previously.”
Based on biophysical neuron and synapse models that allow time-varying dynamics, Hasani and his colleagues have proposed a new RNN (recurrent neural network) architecture. “In a standard RNN model, there is a constant link between neuron one and neuron two, defining how strongly the activity of neuron one influences the activity of neuron two. In our novel RNN architecture, this link is a nonlinear function of time,” said Ramin Hasani.
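The idea of a link that varies over time can be illustrated with a toy continuous-time network. The sketch below is an assumption-laden simplification, not the published model: two neurons evolve according to differential equations, and the influence of neuron 1 on neuron 2 passes through a nonlinear gate, so the effective coupling changes as the neurons' states change, rather than being a fixed weight.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def step(x1, x2, inp, dt=0.01, tau=1.0, w=2.0):
    """One Euler-integration step of a tiny continuous-time network.

    In a standard RNN, the influence of x1 on x2 would be a constant
    w * x1. Here the synapse is gated nonlinearly, so the effective
    coupling w * sigmoid(x1) evolves along with the state itself.
    """
    dx1 = (-x1 + inp) / tau
    dx2 = (-x2 + w * sigmoid(x1)) / tau  # nonlinear, state-dependent coupling
    return x1 + dt * dx1, x2 + dt * dx2

# Drive the network with a constant input and integrate for 20 time units.
x1, x2 = 0.0, 0.0
for _ in range(2000):
    x1, x2 = step(x1, x2, inp=1.0)
```

Because the states are integrated over time rather than computed in one shot, the network naturally carries a memory of past inputs, which is what makes such architectures suitable for time-dependent tasks.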
Parking a Car with a Worm Brain
Allowing the links between cells, and the cell activities themselves, to vary over time opens up entirely new possibilities. Mathias Lechner, Ramin Hasani and their colleagues showed theoretically that, in principle, their architecture can approximate arbitrary dynamics.
To demonstrate the flexibility of the new methodology, they developed and trained a miniature neural network: “We re-purposed a neural circuit from the nervous system of the nematode C. elegans. It is responsible for generating a simple reflexive behavior, the touch-withdrawal reflex. This neural network was simulated and trained to control real-life applications,” said Mathias Lechner, who is now at the Institute of Science and Technology (IST) Austria.
Noting the remarkable success, Hasani said, “The output of the neural network, which in nature would control the movement of nematode worms, is used in our case to steer and accelerate a vehicle. We theoretically and experimentally demonstrated that our novel neural networks can solve complex tasks in real life and in simulated physical environments.”
This does not mean, of course, that artificial worms will be parking our cars in the future. But the work does show that AI with a more brain-like architecture can be vastly more capable than conventional approaches.