Dominating set based arbitrary oriented bilingual scene text localization

Roopa Mirle Jayanth, Mahantesh Kapanaiah


Localizing and recognizing arbitrarily oriented text in natural scene images remains a major challenge because scene text often appears in erratic shapes. This paper presents a simple and effective graph-based algorithm for localizing arbitrarily oriented text, chosen for its compact yet expressive representation, to ease the subsequent text recognition process. Arbitrarily oriented text can be horizontal, vertical, perspective, curved (diagonal/off-diagonal), or a combination of these. As a pre-processing step, image enhancement is performed in the frequency domain to obtain an image representation that is invariant to intensity. Bounding boxes are then drawn around each candidate character in the scene image to extract text regions, exploiting the region-based approach of maximally stable extremal regions (MSER). A typical problem in curved-text localization is that non-text objects may fall within the localized text regions. To the best of our knowledge, our method is the first in the literature to address this problem by searching for dominating sets. The proposed dominating-set method outperforms several traditional methods, including deep learning methods for arbitrary text localization, on challenging datasets such as the 13th International Conference on Document Analysis and Recognition (ICDAR 2015), the multi-script robust reading competition (MRRC), CurvedText 80 (CUTE80), and arbitrary text (ArT).
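The dominating-set idea at the core of the abstract can be illustrated with a minimal sketch. This is not the authors' exact procedure: the graph construction and the greedy heuristic below are assumptions for illustration. Candidate character boxes (e.g. from MSER) become graph nodes, spatially adjacent boxes are joined by edges, and a greedy algorithm picks a small dominating set; isolated nodes that end up dominating only themselves are candidates for non-text outliers.

```python
# Hypothetical sketch: greedy dominating set over a graph of candidate
# character regions. Nodes = candidate boxes, edges = spatial adjacency.
def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours.
    Returns a dominating set: every node is in the set or adjacent to it."""
    undominated = set(adj)
    dom = set()
    while undominated:
        # Greedy step: pick the node whose closed neighbourhood covers
        # the most still-undominated nodes.
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom.add(best)
        undominated -= {best} | adj[best]
    return dom

# Toy usage: node 3 is spatially isolated, so it can only dominate itself
# and would be flagged as a likely non-text outlier.
adjacency = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
print(greedy_dominating_set(adjacency))
```

Greedy selection gives the standard ln(n)-approximation for minimum dominating set, which is typically sufficient when the goal is only to separate connected text clusters from stray non-text regions.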


arbitrarily oriented text; dominating set; maximally stable extremal regions; scene images




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578

This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).