diff --git a/.DS_Store b/.DS_Store
index c7a3fa5..f099020 100644
Binary files a/.DS_Store and b/.DS_Store differ
diff --git a/README.md b/README.md
index c378bec..9da44e1 100644
--- a/README.md
+++ b/README.md
@@ -15,29 +15,29 @@ The system has 2 main modules: text line extraction and text line recognition. T
 For text line extraction, we retrain CRAFT (Character Region Awareness for Text Detection) on 1000 annotated images provided by the Center for Research and Development of Higher Education, The University of Tokyo.
 ![alt text](https://github.com/ducanh841988/Kindai-OCR/blob/master/images/TextlineRecognition.jpg "text line recognition")
-Text line recognition,
-For Kindai V1.0, we employ the attention-based encoder-decoder on our previous publication. We train the text line recognition on 1000 annotated images and 1600 unannotated images provided by Center for Research and Development of Higher Education, The University of Tokyo and National Institute for Japanese Language and Linguistics, respectively.
-For Kindai V2.0, we trained a transformer with more data from National Diet Library and The Center for Open Data in The Humanities.
+Text line recognition:
+For Kindai V1.0, we employ the attention-based encoder-decoder from our previous publication. We train the text line recognizer on 1000 annotated images and 1600 unannotated images provided by the Center for Research and Development of Higher Education, The University of Tokyo, and the National Institute for Japanese Language and Linguistics, respectively.
+For Kindai V2.0, we trained a transformer with additional data from the National Diet Library and The Center for Open Data in The Humanities.
 ## Installing Kindai OCR
-Python==3.7.11
-torch==1.7.0
-torchvision==0.8.1
-opencv-python==3.4.2.17
-scikit-image==0.14.2
-scipy==1.1.0
-Polygon3
-pillow==4.3.0
-pytorch-lightning==1.3.5
-einops==0.3.0
-editdistance==0.5.3
+Python==3.7.11
+torch==1.7.0
+torchvision==0.8.1
+opencv-python==3.4.2.17
+scikit-image==0.14.2
+scipy==1.1.0
+Polygon3
+pillow==4.3.0
+pytorch-lightning==1.3.5
+einops==0.3.0
+editdistance==0.5.3
 ## Running Kindai OCR
-- You should first download the pre_trained models and put them into ./pretrain/ folder.
-[VGG model](https://drive.google.com/file/d/1_A1dEFKxyiz4Eu1HOCDbjt1OPoEh90qr/view?usp=sharing), [CRAFT model](https://drive.google.com/file/d/1-9xt_jjs4btMrz5wzrU1-kyp2c6etFab/view?usp=sharing), [OCR V1.0 model](https://drive.google.com/file/d/1mibg7D2D5rvPhhenLeXNilSLMBloiexl/view?usp=sharing)
-[OCR V2.0 model] ()
+- You should first download the pre-trained models and put them into the ./pretrain/ folder:
+[VGG model](https://drive.google.com/file/d/1_A1dEFKxyiz4Eu1HOCDbjt1OPoEh90qr/view?usp=sharing), [CRAFT model](https://drive.google.com/file/d/1-9xt_jjs4btMrz5wzrU1-kyp2c6etFab/view?usp=sharing), [OCR V1.0 model](https://drive.google.com/file/d/1mibg7D2D5rvPhhenLeXNilSLMBloiexl/view?usp=sharing),
+[OCR V2.0 model](https://drive.google.com/file/d/1cq4PwPS2mXXRjOApst2i7n4G3mBSVqpI/view?usp=drive_link)
 - Copy your images into the ./data/test/ folder
 - Run the following script to recognize images:
 `python test_kindai_1.0.py`
@@ -49,17 +49,15 @@ editdistance==0.5.3
 - Use --canvas_size to set the image size for text line detection
 - An example result from our OCR system
-
+
 ## Citation
 If you find Kindai OCR useful in your research, please consider citing:
 Anh Duc Le, Daichi Mochihashi, Katsuya Masuda, Hideki Mima, and Nam Tuan Ly. 2019. Recognition of Japanese historical text lines by an attention-based encoder-decoder and text line generation. In Proceedings of the 5th International Workshop on Historical Document Imaging and Processing (HIP ’19). Association for Computing Machinery, New York, NY, USA, 37–41. DOI: https://doi.org/10.1145/3352631.3352641
-
-
+
+
 ## Acknowledgment
 We thank the Center for Research and Development of Higher Education, The University of Tokyo, and the National Institute for Japanese Language and Linguistics for providing the Kindai datasets.
 ## Contact
 Dr. Anh Duc Le, email: leducanh841988@gmail.com or anh@ism.ac.jp
-
-
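For reference, the installation and run steps described in the README content above boil down to a short shell session. The sketch below assumes pip and a Python 3.7 environment; the package pins, folder names, and script name are taken from the README, while the --canvas_size value is only an illustrative assumption.

```bash
# Install the pinned dependencies listed under "Installing Kindai OCR".
pip install torch==1.7.0 torchvision==0.8.1 opencv-python==3.4.2.17 \
    scikit-image==0.14.2 scipy==1.1.0 Polygon3 pillow==4.3.0 \
    pytorch-lightning==1.3.5 einops==0.3.0 editdistance==0.5.3

# Place the downloaded pre-trained models in ./pretrain/ and the page images
# to recognize in ./data/test/, then run the V1.0 recognizer.
mkdir -p pretrain data/test
python test_kindai_1.0.py

# The README mentions a --canvas_size option to set the image size for text
# line detection; 1280 here is an assumed example value, not a documented default.
python test_kindai_1.0.py --canvas_size 1280
```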