HTR-VT: Handwritten Text Recognition with Vision Transformer

Pattern Recognition

Yuting Li1,3, Dexiong Chen2, Tinglong Tang1, Xi Shen3

1 China Three Gorges University, China
2 Max Planck Institute of Biochemistry, Germany
3 Intellindust, China

Paper arXiv Code

Abstract


We explore the application of the Vision Transformer (ViT) to handwritten text recognition. The limited availability of labeled data in this domain makes it challenging to achieve high performance with ViT alone, and previous transformer-based models have required external data or extensive pre-training on large datasets to excel. To address this limitation, we introduce a data-efficient ViT method that uses only the encoder of the standard transformer. We find that incorporating a Convolutional Neural Network (CNN) for feature extraction instead of the original patch embedding, together with the Sharpness-Aware Minimization (SAM) optimizer, drives the model towards flatter minima and yields notable improvements. Furthermore, our span mask technique, which masks interconnected features in the feature map, acts as an effective regularizer. Empirically, our approach competes favorably with traditional CNN-based models on small datasets such as IAM and READ2016. It also establishes a new benchmark on the LAM dataset, currently the largest line-level dataset with 19,830 training text lines. We further show that our method surpasses both ViT and traditional CNN-based models in data efficiency, and that as the data volume grows, its performance improves faster than theirs.
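
For readers curious what the SAM update looks like in practice, below is a minimal sketch of one generic Sharpness-Aware Minimization step written in plain PyTorch. The names `model`, `loss_fn`, `batch`, and the value `rho=0.05` are illustrative assumptions; this is not the exact training loop used in the paper.

import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    # One generic SAM update (illustrative sketch, not the paper's code):
    # ascend to a nearby worst-case point, then descend from there.
    images, targets = batch

    # 1) First forward/backward pass: gradients at the current weights.
    loss = loss_fn(model(images), targets)
    loss.backward()

    # 2) Perturb weights along the gradient direction, scaled to norm rho.
    grad_norm = torch.norm(
        torch.stack([p.grad.norm(2) for p in model.parameters() if p.grad is not None]), 2
    )
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                      # w <- w + e
            perturbations.append(e)

    # 3) Second forward/backward pass at the perturbed weights
    #    (zero_grad discards the first-pass gradients).
    optimizer.zero_grad()
    loss_fn(model(images), targets).backward()

    # 4) Restore the original weights and take the actual optimizer step.
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()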

Method


[Figure: HTR-VT workflow]


Our approach encodes a text-line image into features using a CNN feature extractor. The transformer encoder takes these features as input tokens and outputs character predictions. During training, spans of input tokens are replaced by learnable mask tokens. The entire model is optimized with the CTC loss.
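
As a concrete illustration of this pipeline, here is a minimal PyTorch sketch: a small CNN replaces the ViT patch embedding and turns the text-line image into a 1D sequence of feature tokens, a contiguous span of tokens is swapped for a learnable mask token during training, and a transformer encoder with a linear head produces per-frame log-probabilities for the CTC loss. The layer sizes, span length, and the omission of positional encodings are simplifications for illustration and do not match the paper's exact architecture.

import random
import torch
import torch.nn as nn

class HTRViTSketch(nn.Module):
    # Illustrative sketch of the HTR-VT pipeline; hyper-parameters are placeholders.

    def __init__(self, num_classes, dim=256, depth=4, heads=8, span=4):
        super().__init__()
        # CNN feature extractor used instead of the usual ViT patch embedding.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse height, keep width as the sequence axis
        )
        self.mask_token = nn.Parameter(torch.zeros(dim))  # learnable mask token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)  # character classes + CTC blank
        self.span = span

    def forward(self, images):
        # images: (B, 1, H, W) grayscale text-line crops
        feats = self.cnn(images)                   # (B, dim, 1, W')
        tokens = feats.squeeze(2).transpose(1, 2)  # (B, W', dim)

        if self.training:
            # Span masking: replace a contiguous run of tokens with the mask token.
            tokens = tokens.clone()
            length = tokens.size(1)
            start = random.randint(0, max(length - self.span, 0))
            tokens[:, start:start + self.span] = self.mask_token

        encoded = self.encoder(tokens)             # (B, W', dim)
        return self.head(encoded).log_softmax(-1)  # per-frame log-probs for CTC

A training step would then permute the output to (time, batch, class) and feed it to torch.nn.CTCLoss together with the target character indices.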

Results


Please refer to our paper for more experiments.

[Result figures: LAM, IAM, and READ2016]

Visual Results


[Qualitative predictions on LAM, IAM, and READ2016, and attention map visualizations]

Resources


Paper

arXiv

Code

BibTeX

If you find this work useful for your research, please cite:
          @article{li2024htr,
            title={HTR-VT: Handwritten text recognition with vision transformer},
            author={Li, Yuting and Chen, Dexiong and Tang, Tinglong and Shen, Xi},
            journal={Pattern Recognition},
            pages={110967},
            year={2024},
            publisher={Elsevier}
          }

© This webpage was in part inspired by this template.