Explainable extreme boosting model for breast cancer diagnosis

Tamilarasi Suresh, Tsehay Admassu Assegie, Sangeetha Ganesan, Ravulapalli Lakshmi Tulasi, Radha Mothukuri, Ayodeji Olalekan Salau

Abstract


This study investigates Shapley additive explanations (SHAP) for an extreme gradient boosting (XGBoost) model for breast cancer diagnosis. The study employed the Wisconsin breast cancer dataset, in which each sample is characterized by 30 features extracted from an image of breast cells. The SHAP module generated explainer values representing the impact of each feature on the diagnosis. SHAP values were computed for all 569 samples of the dataset, and the resulting explanation indicates that perimeter and concave points have the highest impact on breast cancer diagnosis. SHAP thus explains the XGBoost model's diagnostic outcome by showing which features drive its predictions. The developed XGBoost model achieves an accuracy of 98.42%.
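The intuition behind SHAP is the Shapley value from cooperative game theory: a feature's attribution is its average marginal contribution to the model output, taken over all orderings of the feature set. The following stdlib-only sketch computes exact Shapley values for a hypothetical toy scoring function (the feature names and weights are illustrative assumptions, not the paper's actual model; the SHAP library's `TreeExplainer` computes these efficiently for real tree ensembles such as XGBoost):

```python
import itertools
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: average each feature's marginal
    contribution to value_fn over all orderings of the features."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for perm in itertools.permutations(features):
        coalition = set()
        for f in perm:
            before = value_fn(frozenset(coalition))
            coalition.add(f)
            after = value_fn(frozenset(coalition))
            # Each of the n! orderings contributes with weight 1/n!.
            phi[f] += (after - before) / factorial(n)
    return phi

# Hypothetical toy "model": an additive risk score over three
# illustrative features (weights are made up for the example).
def score(coalition):
    s = 0.0
    if "perimeter" in coalition:
        s += 0.6
    if "concave_points" in coalition:
        s += 0.3
    if "symmetry" in coalition:
        s += 0.1
    return s

phi = shapley_values(["perimeter", "concave_points", "symmetry"], score)
```

Because the toy score is additive, each feature's Shapley value equals its own weight, and the values satisfy the efficiency property: they sum to the score of the full feature set. This is the same attribution logic SHAP applies, per prediction, to the trained XGBoost model.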

Keywords


black-box model; breast cancer prediction; interpretable model; machine learning; model transparency


DOI: http://doi.org/10.11591/ijece.v13i5.pp5764-5769

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578

This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).