Language model optimization for mental health question answering application

Fardan Zamakhsyari, Agung Fatwanto

Abstract


Question answering (QA) is a natural language processing (NLP) task in which the bidirectional encoder representations from transformers (BERT) language model has shown remarkable results. This research focuses on optimizing the IndoBERT and MBERT models for the QA task in the mental health domain, using a translated version of the Amod/mental_health_counseling_conversations dataset from Hugging Face. The optimization process consists of fine-tuning both models and evaluating their performance with the BERTScore components: F1, recall, and precision. The results indicate that fine-tuning significantly boosts IndoBERT's performance, achieving an F1-BERTScore of 91.8%, a recall of 89.9%, and a precision of 93.9%, a 28% improvement. For MBERT, fine-tuning yields an F1-BERTScore of 79.2%, a recall of 73.4%, and a precision of 86.2%, an improvement of only 5%. These findings underscore the importance of fine-tuning and of using language-specific models such as IndoBERT for specialized NLP tasks, demonstrating the potential to build more accurate and contextually relevant question-answering systems in the mental health domain.
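
As a concrete illustration of the evaluation described above, the following is a minimal sketch of scoring model-generated answers against reference answers with the bert-score Python package. The candidate and reference strings are hypothetical placeholders, and the paper's exact scoring configuration (underlying checkpoint, rescaling, etc.) is not specified in this abstract.

    # Minimal sketch: computing BERTScore precision, recall, and F1 for
    # generated answers against gold answers (pip install bert-score).
    # The strings below are hypothetical placeholders, not from the dataset.
    from bert_score import score

    candidates = ["Cobalah berbicara dengan konselor profesional."]            # model outputs (hypothetical)
    references = ["Sebaiknya Anda berkonsultasi dengan konselor profesional."] # gold answers (hypothetical)

    # lang="id" selects the library's default multilingual checkpoint for
    # Indonesian text; the paper's underlying scoring model is not stated here.
    P, R, F1 = score(candidates, references, lang="id")
    print(f"precision={P.mean():.3f}  recall={R.mean():.3f}  F1={F1.mean():.3f}")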

Keywords


Bidirectional encoder representations from transformers; IndoBERT; MBERT; Natural language processing; Question answering

DOI: http://doi.org/10.11591/ijece.v15i5.pp4829-4836

Copyright (c) 2025 Fardan Zamakhsyari, Agung Fatwanto

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578

This journal is published by the Institute of Advanced Engineering and Science (IAES).