Summary: Researchers have developed an AI algorithm that can predict whether a mouse is moving or resting with 95% accuracy by analyzing functional imaging data from across the cortex, potentially advancing brain-machine interface technology. The team's end-to-end deep learning method requires no data preprocessing and can make accurate predictions from just 0.17 seconds of imaging data.
The team also devised a method to identify which parts of the data are important for the predictions, offering a glimpse into the AI's decision-making process. This advance not only improves our understanding of neural decoding, but also paves the way for non-invasive, near real-time brain-machine interfaces.
Key facts:
- High prediction accuracy: The AI model predicts a mouse's behavioral state (moving or resting) from brain imaging data with a 95% success rate, without the need for denoising or predefined regions of interest.
- Fast, generalizable predictions: The model's ability to generate predictions from just 0.17 seconds of data, and its effectiveness across multiple mice, demonstrates its potential for near real-time applications in brain-machine interfaces.
- Opening the AI black box: By identifying the cortical regions important for behavioral classification, the researchers provided valuable insight into the data that informs AI decisions and improved the interpretability of deep learning in neuroscience.
Source: Kobe University
An AI image recognition algorithm can predict whether a mouse is moving based on functional brain imaging data. Researchers at Kobe University also developed a method to identify which parts of the input data are relevant, shedding light on the AI "black box" in a way that could contribute to brain-machine interface technology.
Creating brain-machine interfaces requires understanding how brain signals relate to the behaviors they produce. This is called "neural decoding," and most research in this area has relied on the electrical activity of brain cells, as measured by electrodes implanted in the brain.
On the other hand, functional imaging techniques such as fMRI and calcium imaging monitor the entire brain and can visualize active brain regions through proxy data. Of the two, calcium imaging is faster and provides better spatial resolution. However, these data sources remain underutilized in neural decoding efforts.
One particular obstacle is the need for data preprocessing, such as removing noise or identifying regions of interest, which makes it difficult to devise generalized procedures for neurally decoding many different types of behavior.
Takehiro Ajioka, a medical student at Kobe University, tapped into the interdisciplinary expertise of a team led by neuroscientist Toru Takumi to tackle this problem.
"Our experience with VR-based real-time imaging and motion tracking systems for mice, combined with deep learning techniques, allowed us to explore 'end-to-end' deep learning methods that require neither preprocessing nor pre-specified features, and thus to evaluate information from across the cortex for neural decoding," says Ajioka.
They combined two different deep learning algorithms, one for spatial patterns and one for temporal patterns, and trained the resulting AI model on whole-cortex imaging footage of mice resting or running on a treadmill, so that it could accurately predict from the image data whether a mouse was moving or resting.
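The spatial-plus-temporal combination described above can be sketched roughly as follows. This is a hypothetical illustration in PyTorch, not the authors' code: the layer sizes, the choice of a GRU as the temporal module, the 64×64 input resolution, and the 5-frame window (0.17 s would be about 5 frames if the imaging ran at roughly 30 Hz, an assumed rate) are all placeholders for the sketch.

```python
import torch
import torch.nn as nn

class CNNRNNDecoder(nn.Module):
    """Toy spatial-CNN + temporal-RNN classifier for frame sequences."""
    def __init__(self, n_classes: int = 2, feat_dim: int = 32, hidden: int = 64):
        super().__init__()
        # Spatial module: extracts a feature vector from each 1-channel frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        # Temporal module: aggregates the per-frame features over time.
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # moving vs. resting logits

    def forward(self, x):  # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))   # run the CNN on every frame: (b*t, feat_dim)
        feats = feats.view(b, t, -1)        # regroup into sequences: (b, t, feat_dim)
        _, h = self.rnn(feats)              # final hidden state: (1, b, hidden)
        return self.head(h[-1])             # (b, n_classes)

model = CNNRNNDecoder()
logits = model(torch.randn(2, 5, 1, 64, 64))  # 2 clips of 5 frames each
print(logits.shape)  # torch.Size([2, 2])
```

The key design point is that the CNN sees one frame at a time (spatial patterns) while the RNN sees only the sequence of CNN outputs (temporal patterns), which is what lets the combination classify very short clips.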
In the journal PLOS Computational Biology, the Kobe University researchers report that the model predicts the animal's true behavioral state with 95% accuracy, without the need to remove noise or predefine regions of interest.
Moreover, their model made these accurate predictions based on just 0.17 seconds of data, meaning that near real-time speeds are achievable. It also worked across five different individuals, showing that the model can filter out individual characteristics.
The neuroscientists then determined which parts of the image data contributed most to the predictions by removing portions of the data and observing how the model performed without them: the more a prediction deteriorated, the more important the removed data was.
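The ablation procedure described above resembles what is often called occlusion analysis. A minimal NumPy sketch of that idea (not the authors' implementation; the patch size, the zero masking value, and the toy stand-in decoder are all assumptions):

```python
import numpy as np

def occlusion_importance(frames, score_fn, patch=16):
    """frames: (time, H, W) array. Returns an (H//patch, W//patch) importance map:
    for each spatial patch, how much the decoder's score drops when it is zeroed."""
    base = score_fn(frames)
    t, h, w = frames.shape
    imp = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            masked = frames.copy()
            masked[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
            # Larger score drop => that region mattered more to the decision.
            imp[i, j] = base - score_fn(masked)
    return imp

# Toy stand-in for a trained decoder: its "confidence" is just the mean signal
# in the upper-left quadrant, so only that region should register as important.
toy = lambda f: f[:, :32, :32].mean()
rng = np.random.default_rng(0)
frames = rng.random((5, 64, 64))
imp = occlusion_importance(frames, toy)
# The four patches inside the upper-left quadrant show a positive drop;
# patches the toy decoder never reads show a drop of exactly zero.
```

In the study, the analogous per-region score drops are what singled out specific cortical areas as important for the behavioral classification.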
“This model's ability to identify cortical regions important for behavioral classification is particularly exciting because it opens the lid on the 'black box' aspect of deep learning techniques,” explains Ajioka.
In summary, the Kobe University team established a generalizable method to identify behavioral states from functional imaging data across the cortex and developed a method to identify which parts of the data the predictions are based on. Ajioka explains why this is relevant.
“This study establishes the foundation for further development of brain-machine interfaces capable of near real-time behavioral decoding using non-invasive brain imaging.”
Funding: This research was supported by the Japan Society for the Promotion of Science (grants JP16H06316, JP23H04233, JP23KK0132, JP19K16886, JP23K14673, and JP23H04138), the Japan Agency for Medical Research and Development (grant JP21wm0425011), the Japan Science and Technology Agency (grants JPMJMS2299 and JPMJMS229B), the National Center of Neurology and Psychiatry (grant 30-9), and the Takeda Science Foundation. It was carried out in collaboration with researchers from the ATR Neuroinformatics Laboratory.
About this AI and movement research news
Author: Daniel Shenz
Source: Kobe University
Contact: Daniel Shenz – Kobe University
Image: Image credited to Neuroscience News
Original research: Open access.
"An end-to-end deep learning approach to mouse behavioral classification from whole-cortex calcium imaging" by Toru Takumi et al. PLOS Computational Biology
abstract
An end-to-end deep learning approach to classify mouse behavior from whole-cortex calcium imaging
Deep learning is a powerful tool for neural decoding and has been widely applied in systems neuroscience and clinical research.
Interpretable and transparent models that can account for the neural decoding of intended movements are important for identifying which features of brain activity drive deep learning decoders. In this study, we examined the performance of deep learning in classifying mouse behavioral states from mesoscopic, cortex-wide calcium imaging data.
Our decoder, which combines a convolutional neural network (CNN)-based end-to-end decoder and a recurrent neural network (RNN), classifies behavioral states with high accuracy and robustness to individual differences on subsecond time scales. Using a CNN-RNN decoder, we identified that forelimb and hindlimb regions of the somatosensory cortex significantly contribute to behavioral classification.
Our findings suggest that the end-to-end approach has the potential to be an interpretable deep learning method that unbiasedly visualizes important regions of the brain.