Contribution of recurrent connectionist language models in improving LSTM-based Arabic text recognition in videos
Published in: Pattern Recognition, Vol. 64, pp. 245-254
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-04-2017
Summary: Unconstrained text recognition in videos is a very challenging task that is beginning to draw the attention of the OCR community. For Arabic video content, however, this problem is much less addressed than it is for Latin script. This work presents our latest contribution to this task, introducing recurrent connectionist language modeling to improve Long Short-Term Memory (LSTM) based Arabic text recognition in videos. For an LSTM OCR system that already yields high recognition rates, introducing language models can easily deteriorate the results. In this work, we focus on two main factors to reach better improvements. First, we propose Recurrent Neural Network (RNN) language models, which are able to capture long-range linguistic dependencies. We use simple RNN models as well as models learned jointly with a Maximum Entropy language model. Second, for the decoding scheme, we are not limited to an n-best rescoring of the OCR hypotheses. Instead, we propose a modified beam search algorithm that uses both OCR and language model probabilities in parallel at each decoding time step. We introduce a set of hyper-parameters to the algorithm in order to boost recognition results and to control the decoding time. The method is applied to Arabic text recognition in TV broadcasts. We conduct an extensive evaluation of the method and study the impact of the language models and the decoding parameters. Results show an improvement of 16% in word recognition rate (WRR) over the baseline that uses only the OCR responses, while keeping a reasonable response time. Moreover, the proposed recurrent connectionist models outperform frequency-based models by more than 4% in WRR. The final recognition scheme provides outstanding results, outperforming a well-known commercial OCR engine by more than 36% in WRR.
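The abstract describes a modified beam search that combines OCR and language-model probabilities in parallel at each decoding time step, with hyper-parameters (such as the beam width and a language-model weight) to trade recognition quality against decoding time. A minimal sketch of that idea follows; the function names, the `lm_prob` interface, and the default hyper-parameter values are illustrative assumptions, not the paper's exact algorithm:

```python
import math
from typing import Callable, Dict, List, Tuple

def joint_beam_search(
    ocr_probs: List[Dict[str, float]],          # per time step: character -> OCR probability
    lm_prob: Callable[[str, str], float],       # lm_prob(prefix, char) -> LM probability (hypothetical interface)
    beam_width: int = 5,                        # hyper-parameter controlling decoding time
    lm_weight: float = 0.5,                     # hyper-parameter weighting the language model
) -> str:
    """Beam search where OCR and LM log-probabilities are summed at every step."""
    beams: List[Tuple[str, float]] = [("", 0.0)]  # (prefix, cumulative log score)
    for step_probs in ocr_probs:
        candidates: List[Tuple[str, float]] = []
        for prefix, score in beams:
            for ch, p_ocr in step_probs.items():
                # combine OCR and LM evidence in parallel, not as a post-hoc rescoring
                s = score + math.log(p_ocr) + lm_weight * math.log(lm_prob(prefix, ch))
                candidates.append((prefix + ch, s))
        # prune: keep only the best `beam_width` hypotheses
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_width]
    return max(beams, key=lambda x: x[1])[0]

# toy usage: two time steps, uniform (uninformative) language model
uniform_lm = lambda prefix, ch: 0.5
steps = [{"a": 0.9, "b": 0.1}, {"a": 0.2, "b": 0.8}]
print(joint_beam_search(steps, uniform_lm))  # "ab"
```

In a real system `lm_prob` would query the recurrent language model conditioned on the full prefix, which is where long-range dependencies enter the search; shrinking `beam_width` or `lm_weight` is how such hyper-parameters control decoding time and the LM's influence.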
Highlights:
• Different recurrent connectionist language models to improve LSTM-based Arabic text recognition in videos.
• An efficient joint decoding paradigm using language model and LSTM responses.
• Additional decoding hyper-parameters, extensively evaluated, that improve recognition results and optimize running time.
• Significant recognition improvement by integrating connectionist language models, outperforming the contribution of n-grams.
• A final Arabic OCR system that significantly outperforms a commercial OCR engine.
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2016.11.011