Using deep learning models for learning semantic text similarity of Arabic questions

Mahmoud Hammad, Mohammed Al-Smadi, Qanita Bani Baker, Sa’ad A. Al-Zboon

Abstract


Question-answering platforms serve millions of users seeking knowledge and solutions to their daily problems. However, knowledge seekers often struggle to find the right answer among many similar answered questions, while answer writers feel they must repeat the same answer for many similar questions. This research tackles the problem of learning the semantic text similarity among different asked questions using deep learning. Three models are implemented to address this problem: (a) a supervised machine learning model using XGBoost trained with pre-defined features, (b) an adapted Siamese-based deep learning recurrent architecture trained with pre-defined features, and (c) a pre-trained deep bidirectional transformer based on the BERT model. The proposed models were evaluated using a reference Arabic dataset from the mawdoo3.com company. Evaluation results show that the BERT-based model outperforms the other two models with an F1 = 92.99%, whereas the Siamese-based model comes in second place with F1 = 89.048%, and the XGBoost baseline model achieves the lowest result with F1 = 86.086%.
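For readers unfamiliar with the sentence-pair setup used by BERT-style models, the following is a minimal sketch (not the authors' implementation) of how a pre-trained transformer can score an Arabic question pair for similarity; the checkpoint name, the two-class labeling scheme, and the example questions are illustrative assumptions, and the paper's actual model and hyperparameters are described in the full text.

```python
# Hedged sketch: sentence-pair similarity scoring with a pre-trained BERT encoder,
# using the Hugging Face `transformers` library. The checkpoint below is a
# placeholder; an Arabic-specific BERT could be substituted.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # assumption, not the paper's checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2: "similar" vs. "not similar" question pair (assumed labeling scheme)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def similarity_logits(question_a: str, question_b: str) -> torch.Tensor:
    """Encode the two questions as one sentence pair and return the class logits."""
    inputs = tokenizer(question_a, question_b,
                       truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.logits

# Example usage with two hypothetical Arabic questions:
logits = similarity_logits("ما هي عاصمة الأردن؟", "ما اسم عاصمة المملكة الأردنية؟")
probs = torch.softmax(logits, dim=-1)
print(probs)  # [P(not similar), P(similar)]; meaningful only after task-specific fine-tuning
```

In this setup the two questions are concatenated into a single input separated by BERT's special tokens, and a classification head over the pooled representation is fine-tuned on labeled question pairs; the sketch above shows inference only, before any fine-tuning.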

Keywords


Arabic dataset; BERT; deep learning; machine learning; semantic text similarity



DOI: http://doi.org/10.11591/ijece.v11i4.pp%25p


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

ISSN 2088-8708, e-ISSN 2722-2578