Understanding AI: How Studying Neural Networks and the Human Brain Helps


A new study on behavioral analysis using artificial intelligence aims to understand how the animal brain works

Artificial intelligence is a common buzzword these days. Surprisingly, most users are not aware of how it is interwoven into their everyday lives. From gleaning data using machine learning tools to biometrics, AI continues to be a huge part of our ‘digital life’. It can power entire integrated ecosystems of devices that learn about users, their environment, and their preferences, and then adjust accordingly to provide the optimal response or action. It achieves this by carrying out behavioral analysis of its users.

For many decades, scientists have been intrigued by the human brain and how it functions. They have tried to replicate it in artificial neural networks, which attempt to identify underlying relationships in a dataset. As we move further into the 21st century, we will increasingly be working alongside products built on neural networks. For this, it is important to understand how these artificial neural networks function and arrive at decisions. This is crucial because neural networks form the framework that enables several machine learning algorithms to work together to process very complex operations. Some of the key problems they address include classification, regression, function estimation, and dimensionality reduction.
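
To make this concrete, here is a minimal illustrative sketch, not taken from the study, of a small feed-forward neural network solving one of the classification problems mentioned above; the synthetic dataset and layer size are arbitrary choices for demonstration.

```python
# A minimal sketch: a small feed-forward neural network trained on a
# synthetic classification task, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic dataset standing in for any tabular classification problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 16 units: the "network" that learns relationships in the data.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
```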

To get a better and clearer understanding of the “mapping” of a neural network architecture, it is integral to discern how the brain of a living being functions in certain situations and responds to external stimuli (data). This approach looks sensible if we draw on the common similarities between the (human) brain and neural networks. For instance, like the human brain, neural networks can learn from their own mistakes.
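
As a rough illustration of “learning from mistakes”, the toy sketch below, which is not drawn from the paper, shows a single artificial neuron repeatedly measuring its prediction error and nudging its weights in the direction that reduces it.

```python
import numpy as np

# Illustrative only: a single "neuron" fitting y = 2x + 1 by repeatedly
# measuring its error and adjusting its weights to shrink that error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1

w, b, lr = 0.0, 0.0, 0.1
for step in range(200):
    pred = w * x + b
    error = pred - y                  # the network's "mistake"
    w -= lr * np.mean(error * x)      # adjust weight to reduce the mistake
    b -= lr * np.mean(error)          # adjust bias likewise
    if step % 50 == 0:
        print(f"step {step}: mean squared error = {np.mean(error**2):.4f}")
```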

One of the interesting features of animal behavior is that it can be described as a neuronally driven sequence of recurring postures through time. Just as with artificial neural networks, scientists find it challenging to study what happens in neuronal networks during particular behaviors. Recently, researchers at the University of Bonn have presented a new method that can help address this.

They have presented their work in the paper “DeepLabStream enables closed-loop behavioral experiments using deep learning-based markerless, real-time posture detection,” published in the Nature Portfolio journal Communications Biology.

The paper mentions that most currently available technologies focus on offline pose estimation with high spatiotemporal resolution. However, to correlate behavior with neuronal activity, it is often necessary to detect and react to behavioral expressions online. So, the researchers developed DeepLabStream (DLStream), a multi-purpose software solution that enables markerless, real-time tracking and neuronal manipulation of freely moving animals during ongoing experiments. In simpler terms, DeepLabStream uses artificial intelligence to estimate the posture and behavioral expressions of mice in real time and can respond to them immediately.
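
The closed-loop idea can be sketched in a few lines of Python. This is not DeepLabStream's actual API; every function name below is a hypothetical placeholder for the real camera, pose-estimation, and stimulation components.

```python
import time
import random

# Hypothetical stand-ins: in a real setup the pose would come from a
# neural-network tracker and the trigger would drive lab hardware.
def estimate_posture(frame):
    # Placeholder "pose": x/y position of the animal's nose, in pixels.
    return {"nose": (random.uniform(0, 640), random.uniform(0, 480))}

def deliver_stimulus():
    print("stimulus triggered")

def posture_of_interest(pose, roi=(0, 0, 100, 100)):
    # True when the nose keypoint falls inside a rectangular region of interest.
    x, y = pose["nose"]
    x0, y0, x1, y1 = roi
    return x0 <= x <= x1 and y0 <= y <= y1

# Closed loop: estimate the posture from each camera frame and react immediately.
for frame_index in range(300):          # stands in for a live camera stream
    frame = None                        # a real frame would be read from the camera here
    pose = estimate_posture(frame)
    if posture_of_interest(pose):
        deliver_stimulus()
    time.sleep(1 / 30)                  # roughly 30 frames per second
```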

To test the capabilities of the DeepLabStream software, the researchers conducted a classic, multilayered, freely moving conditioning task, as well as a head direction-dependent optogenetic stimulation experiment using a neuronal activity-dependent, light-induced labeling system (Cal-Light). In the first experiment, the mice were trained to move to a corner of the box within a set period of time when viewing a specific image in order to collect a reward. Simultaneously, cameras captured their movements while the system automatically logged them.
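
A hypothetical version of the reward rule in this first experiment might look like the following; the keypoint format, radius, and timeout values are assumptions for illustration, not parameters from the paper.

```python
# Hypothetical helper: did the tracked animal reach the target corner within
# the allowed time window after the image was shown?
def reached_corner(track, stimulus_time, corner, radius=80.0, timeout=10.0):
    """track: list of (timestamp_seconds, x, y) posture estimates."""
    cx, cy = corner
    for t, x, y in track:
        if t < stimulus_time:
            continue
        if t - stimulus_time > timeout:
            return False               # ran out of time: no reward
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            return True                # reward condition met
    return False

# Example: the animal enters the corner 4.2 s after the image appears.
track = [(0.0, 300, 240), (2.0, 200, 180), (4.2, 30, 40)]
print(reached_corner(track, stimulus_time=0.0, corner=(0, 0)))
```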

In the second experiment, they employed the Cal-Light protein system to label the neuronal networks corresponding to a head tilt. These proteins were only activated in the brain at specific head inclinations when exposed to light, and they color-coded the underlying neuronal networks. So, if the corresponding head-tilt-dependent neuronal networks were not activated, the marker failed to appear, which clarified the causal relationship.
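
A head-direction-dependent trigger of this kind can be illustrated with a simple geometric check; the keypoints, target angle, and tolerance below are assumed values for illustration, not those used in the study.

```python
import math

# Illustrative head-direction trigger: compute the head angle from two tracked
# keypoints and switch the light on only when it falls inside a target band.
def head_angle(nose, neck):
    """Angle of the neck-to-nose axis in degrees, measured from the x-axis."""
    dx, dy = nose[0] - neck[0], nose[1] - neck[1]
    return math.degrees(math.atan2(dy, dx))

def light_on(nose, neck, target=90.0, tolerance=15.0):
    """True when the head points within +/- tolerance of the target direction."""
    deviation = (head_angle(nose, neck) - target + 180.0) % 360.0 - 180.0
    return abs(deviation) <= tolerance

print(light_on(nose=(100, 205), neck=(100, 100)))   # pointing near the target: True
print(light_on(nose=(205, 100), neck=(100, 100)))   # pointing sideways: False
```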

An article on Phys.org reports that, until now, the inability to accurately detect complex behavioral episodes in real time was a limiting factor in the applicability of Cal-Light. However, the DeepLabStream software now offers a temporal resolution in the millisecond range.

According to Jens Schweihoff, one of the study’s co-authors at University Hospital Bonn, “Since DLStream enables real-time behavior detection, the available range of applications for the Cal-Light method is significantly expanded, which makes it possible to conduct automated, behavior-dependent manipulation during ongoing experiments.”

The researchers believe that, using this combination of methods, they can now better study the causal relationships between behavior and the underlying neuronal networks. They also observed no obvious limitations to the applicability of DeepLabStream to different organisms and other experimental paradigms.
