Functional magnetic resonance imaging-based brain decoding with visual semantic model

Piyawat Saengpetch, Luepol Pipanmemekaporn, Suwatchai Kamolsantiroj


Brain activity patterns can be used to identify what a person has in mind. Decoding such patterns from functional magnetic resonance imaging (fMRI) is the most widely adopted approach. However, the accuracy of fMRI-based brain decoders remains restricted by limited training samples. Existing decoders typically address this limitation by hand-designing feature representations for the labels and training a model to predict those features for a given label. Moreover, it remains unclear which kinds of semantic features best explain neural activity patterns. In the current work, we propose a new computational model that learns label representations consistent with fMRI activity responses. Experiments demonstrate that the proposed label representation, when decoded from brain activity patterns, achieves higher accuracy than conventional text-derived feature techniques. In addition, we present a multi-task training model to mitigate the problem of limited training data. The multi-task learning model outperforms state-of-the-art computational methods, and its decoding features can be obtained easily.
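The decoding pipeline the abstract describes, mapping fMRI voxel patterns into a visual-semantic label space and classifying by similarity, can be illustrated with a minimal sketch. This is not the authors' implementation: the data are synthetic, the ridge-regression mapping and the cosine-similarity label matching are assumed stand-ins for the learned model, and all sizes (trials, voxels, embedding dimension, label count) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 trials, 50 voxels, 8-D semantic embeddings, 5 labels
n_trials, n_voxels, n_dim, n_labels = 100, 50, 8, 5
label_emb = rng.normal(size=(n_labels, n_dim))   # one visual-semantic vector per label
y = rng.integers(0, n_labels, size=n_trials)     # true label of each trial
Z = label_emb[y]                                 # target semantic vectors per trial
# Simulated fMRI responses: a linear image of the semantic vectors plus noise
X = Z @ rng.normal(size=(n_dim, n_voxels)) + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Ridge regression from voxel patterns to the shared semantic space
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Z)

def decode(x):
    """Project one trial into semantic space, return the most similar label."""
    z = x @ W
    sims = (label_emb @ z) / (
        np.linalg.norm(label_emb, axis=1) * np.linalg.norm(z) + 1e-12
    )
    return int(np.argmax(sims))

preds = np.array([decode(x) for x in X])
accuracy = (preds == y).mean()
```

Because labels are matched by similarity in the semantic space rather than by a fixed output layer, the same mapping can in principle score labels never seen during training, which is the usual motivation for semantic-space decoders.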


Brain decoding; Deep learning; fMRI activity patterns; Multi-task learning; Visual semantic model

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578