We explore the application of the Vision Transformer (ViT) to handwritten text recognition. The limited availability of labeled data in this domain makes it challenging to achieve high performance by relying on ViT alone, and previous transformer-based models have required external data or extensive pre-training on large datasets to excel. To address this limitation, we introduce a data-efficient ViT method that uses only the encoder of the standard transformer. We find that incorporating a Convolutional Neural Network (CNN) for feature extraction in place of the original patch embedding, and employing the Sharpness-Aware Minimization (SAM) optimizer so that the model converges towards flatter minima, yields notable improvements. Furthermore, our span mask technique, which masks interconnected features in the feature map, acts as an effective regularizer. Empirically, our approach competes favorably with traditional CNN-based models on small datasets such as IAM and READ2016. Additionally, it establishes a new benchmark on the LAM dataset, currently the largest dataset with 19,830 training text lines. We further show that our method is more data-efficient than ViT and traditional CNN-based models, and that as the data volume increases, its performance improves at a faster rate than either.
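As a rough illustration of the SAM optimizer mentioned above, below is a minimal sketch of one SAM update step in PyTorch. The function name, the helper structure, and the hyper-parameter rho=0.05 are our own illustrative choices, not the authors' implementation: the weights are first perturbed along the normalized gradient direction, and the actual update uses the gradients computed at that perturbed point, which biases training towards flatter minima.

import torch

def sam_step(model, base_optimizer, loss_fn, inputs, targets, rho=0.05):
    # First pass: gradients of the loss at the current weights.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Perturb each parameter along its normalized gradient direction,
    # i.e. move towards the locally highest loss within a radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)
            perturbations.append((p, eps))
    base_optimizer.zero_grad()

    # Second pass: "sharpness-aware" gradients at the perturbed weights.
    loss_fn(model(inputs), targets).backward()

    # Restore the original weights, then update them with the base optimizer.
    with torch.no_grad():
        for p, eps in perturbations:
            p.sub_(eps)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()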
Our approach encodes a text-line image into features with a CNN feature extractor. The transformer encoder takes these features as input tokens and outputs character predictions. During training, spans of input tokens are replaced by learnable mask tokens. The entire model is optimized with the CTC loss.
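The following is a minimal PyTorch sketch of this pipeline under our own assumptions: the class and function names, the CNN configuration, the layer sizes, and the span-masking parameters are illustrative, not the authors' code.

import torch
import torch.nn as nn

class HTRViTSketch(nn.Module):
    def __init__(self, num_classes, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # CNN feature extractor used in place of the standard ViT patch embedding.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse height: one token per horizontal position
        )
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learnable mask token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # encoder only, no decoder
        self.head = nn.Linear(d_model, num_classes + 1)  # +1 for the CTC blank symbol

    def forward(self, images, span_mask=None):
        # images: (B, 1, H, W) grayscale text-line images.
        feats = self.cnn(images)                   # (B, d_model, 1, W')
        tokens = feats.squeeze(2).transpose(1, 2)  # (B, W', d_model) input tokens
        if self.training and span_mask is not None:
            # Replace contiguous spans of input tokens with the learnable mask token.
            tokens = torch.where(span_mask.unsqueeze(-1),
                                 self.mask_token.expand_as(tokens), tokens)
        out = self.encoder(tokens)                 # positional embeddings omitted for brevity
        return self.head(out).log_softmax(-1)      # per-position log-probs for CTC


def random_span_mask(batch, length, span=4, ratio=0.3):
    # Hypothetical helper: mask contiguous spans covering roughly `ratio` of the positions.
    mask = torch.zeros(batch, length, dtype=torch.bool)
    n_spans = max(1, int(length * ratio / span))
    for b in range(batch):
        for start in torch.randint(0, max(1, length - span), (n_spans,)):
            start = int(start)
            mask[b, start:start + span] = True
    return mask

At training time, the (B, T, C) log-probabilities would be transposed to (T, B, C) and passed to nn.CTCLoss together with the target character indices; positional embeddings, dropout, and other details of the actual model are omitted in this sketch.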
Please refer to our paper for more experiments.
@article{li2024htr,
  title={HTR-VT: Handwritten text recognition with vision transformer},
  author={Li, Yuting and Chen, Dexiong and Tang, Tinglong and Shen, Xi},
  journal={Pattern Recognition},
  pages={110967},
  year={2024},
  publisher={Elsevier}
}
© This webpage was in part inspired by this template.