From 8003584e035d4b416f719018dfe1d419cd223728 Mon Sep 17 00:00:00 2001
From: tintin
Date: Tue, 11 Jul 2023 16:33:23 +0900
Subject: [PATCH] update model on gdrive

---
 .DS_Store | Bin 6148 -> 6148 bytes
 README.md | 42 ++++++++++++++++++++----------------------
 2 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/.DS_Store b/.DS_Store
index c7a3fa58aff9d3fce13250834f6b944d2499216f..f09902034b81a4d01a5544ecb45738d2e99ec556 100644
GIT binary patch
delta 106
zcmZoMXfc=|#>B)qu~2NHo+2a5#DLw41(+BaStj!^z7gbLC}1dJNM$Gil8FqN40)3!
r7*Ak|ZhpzA%CecAgP#Lv(q=}c@640=MJzcO85n?wfnjri$QEV*Mba0K

delta 68
zcmZoMXfc=|#>B`mu~2NHo+2aD#DLwC4MbQb^E18N{DoPVWfRLE#?9;;{2V|vn?Evt
XXP(S2V#&b(1dI#}Oq&BlwlD(#o^lb|

diff --git a/README.md b/README.md
index c378bec..9da44e1 100644
--- a/README.md
+++ b/README.md
@@ -15,29 +15,29 @@ The system has 2 main modules: text line extraction and text line recognition. T
 For text line extraction, we retrain CRAFT (Character Region Awareness for Text Detection) on 1000 annotated images provided by the Center for Research and Development of Higher Education, The University of Tokyo.
 ![alt text](https://github.com/ducanh841988/Kindai-OCR/blob/master/images/TextlineRecognition.jpg "text line recognition")
-Text line recognition,
-For Kindai V1.0, we employ the attention-based encoder-decoder on our previous publication. We train the text line recognition on 1000 annotated images and 1600 unannotated images provided by Center for Research and Development of Higher Education, The University of Tokyo and National Institute for Japanese Language and Linguistics, respectively.
-For Kindai V2.0, we trained a transformer with more data from National Diet Library and The Center for Open Data in The Humanities.
+For text line recognition:
+For Kindai V1.0, we employ the attention-based encoder-decoder from our previous publication. We train the text line recognizer on 1000 annotated images and 1600 unannotated images provided by the Center for Research and Development of Higher Education, The University of Tokyo, and the National Institute for Japanese Language and Linguistics, respectively.
+For Kindai V2.0, we train a transformer on additional data from the National Diet Library and The Center for Open Data in The Humanities.

 ## Installing Kindai OCR

-Python==3.7.11
-torch==1.7.0
-torchvision==0.8.1
-opencv-python==3.4.2.17
-scikit-image==0.14.2
-scipy==1.1.0
-Polygon3
-pillow==4.3.0
-pytorch-lightning==1.3.5
-einops==0.3.0
-editdistance==0.5.3
+Python==3.7.11
+torch==1.7.0
+torchvision==0.8.1
+opencv-python==3.4.2.17
+scikit-image==0.14.2
+scipy==1.1.0
+Polygon3
+pillow==4.3.0
+pytorch-lightning==1.3.5
+einops==0.3.0
+editdistance==0.5.3

 ## Running Kindai OCR

-- You should first download the pre_trained models and put them into ./pretrain/ folder.
-[VGG model](https://drive.google.com/file/d/1_A1dEFKxyiz4Eu1HOCDbjt1OPoEh90qr/view?usp=sharing), [CRAFT model](https://drive.google.com/file/d/1-9xt_jjs4btMrz5wzrU1-kyp2c6etFab/view?usp=sharing), [OCR V1.0 model](https://drive.google.com/file/d/1mibg7D2D5rvPhhenLeXNilSLMBloiexl/view?usp=sharing)
-[OCR V2.0 model] ()
+- You should first download the pre-trained models and put them into the ./pretrain/ folder:
+[VGG model](https://drive.google.com/file/d/1_A1dEFKxyiz4Eu1HOCDbjt1OPoEh90qr/view?usp=sharing), [CRAFT model](https://drive.google.com/file/d/1-9xt_jjs4btMrz5wzrU1-kyp2c6etFab/view?usp=sharing), [OCR V1.0 model](https://drive.google.com/file/d/1mibg7D2D5rvPhhenLeXNilSLMBloiexl/view?usp=sharing),
+[OCR V2.0 model](https://drive.google.com/file/d/1cq4PwPS2mXXRjOApst2i7n4G3mBSVqpI/view?usp=drive_link)
 - Copy your images into the ./data/test/ folder
 - Run the following script to recognize images: `python test_kindai_1.0.py` (see the setup-and-run sketches below)
@@ -49,17 +49,15 @@ editdistance==0.5.3
 - Use --canvas_size to set the image size for text line detection
 - An example result from our OCR system:
-
+
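+A minimal setup sketch, assuming a fresh conda environment (conda itself is an assumption; any Python 3.7 virtualenv works). The package versions are exactly the ones pinned above.
+
+```bash
+# Sketch only: create an isolated Python 3.7.11 environment and install the pinned packages.
+conda create -n kindai python=3.7.11
+conda activate kindai
+pip install torch==1.7.0 torchvision==0.8.1 opencv-python==3.4.2.17 \
+    scikit-image==0.14.2 scipy==1.1.0 Polygon3 pillow==4.3.0 \
+    pytorch-lightning==1.3.5 einops==0.3.0 editdistance==0.5.3
+```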
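+
+And a hypothetical end-to-end run. The script name, folders, and --canvas_size flag are taken from the steps above; the input filename and the canvas size value are placeholders.
+
+```bash
+# Assumes the pre-trained models have already been downloaded into ./pretrain/.
+cp /path/to/scan_001.jpg data/test/           # input pages go into ./data/test/
+python test_kindai_1.0.py --canvas_size 1280  # --canvas_size: detection image size (placeholder value)
+```
+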
 ## Citation

 If you find Kindai OCR useful in your research, please consider citing:
 Anh Duc Le, Daichi Mochihashi, Katsuya Masuda, Hideki Mima, and Nam Tuan Ly. 2019. Recognition of Japanese historical text lines by an attention-based encoder-decoder and text line generation. In Proceedings of the 5th International Workshop on Historical Document Imaging and Processing (HIP ’19). Association for Computing Machinery, New York, NY, USA, 37–41. DOI: https://doi.org/10.1145/3352631.3352641
-
-
+
+
 ## Acknowledgment

 We thank the Center for Research and Development of Higher Education, The University of Tokyo, and the National Institute for Japanese Language and Linguistics for providing the kindai datasets.

 ## Contact

 Dr. Anh Duc Le, email: leducanh841988@gmail.com or anh@ism.ac.jp
-
-