Braille code classification tool based on computer vision for the visually impaired

Hany M. Sadak, Ashraf A. M. Khalaf, Aziza I. Hussein, Gerges Mansour Salama

Abstract


Blind and visually impaired people (VIP) face many challenges in writing, as they usually rely on traditional tools such as the slate and stylus or on expensive typewriters such as the Perkins Brailler, which often causes accessibility and affordability issues. This article introduces a novel portable, cost-effective device that helps VIP learn how to write by utilizing a deep-learning model to detect a Braille cell. Using deep learning instead of electrical circuits can reduce costs and enable a mobile app to act as a virtual teacher for blind users. The app could suggest sentences for the user to write and check their work, providing an independent learning platform; this feature is difficult to implement with electronic circuits. The portable device generates Braille character cells using light-emitting diode (LED) arrays instead of embossed Braille dots. A smartphone camera captures the image, which is then processed by a deep-learning model that detects the Braille cell and converts it to English text. This article also provides a new dataset of custom Braille character cells. Moreover, applying a transfer-learning technique to the mobile network version 2 (MobileNetV2) model offers a basis for the development of a comprehensive mobile application. The model achieved an accuracy of 97%.
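For readers who want a concrete picture of the transfer-learning approach the abstract mentions, the following is a minimal sketch, assuming a TensorFlow/Keras workflow with an ImageNet-pretrained MobileNetV2 backbone and a frozen feature extractor. The class count, image size, dataset path, and training settings are illustrative assumptions, not details taken from the paper.

import tensorflow as tf

NUM_CLASSES = 26          # assumed: one class per English letter
IMG_SIZE = (224, 224)     # MobileNetV2's default input resolution

# Load the custom LED-Braille image dataset from class subfolders
# (hypothetical directory layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "braille_led_dataset/train", image_size=IMG_SIZE, batch_size=32)

# MobileNetV2 backbone pre-trained on ImageNet, with its classifier removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False    # freeze the backbone; only the new head is trained

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

Freezing the backbone and training only the small classification head is what keeps this approach lightweight enough for a mobile application, which is the main design motivation behind choosing MobileNetV2.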

Keywords


Assistive technology; Braille dataset; Braille typewriters; Computer vision; Machine learning; MobileNetV2



DOI: http://doi.org/10.11591/ijece.v14i6.pp6992-7000

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578

This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).