diff --git a/README.md b/README.md
index 9f94ee481..bafc235cc 100644
--- a/README.md
+++ b/README.md
@@ -2,14 +2,20 @@
 text detection mainly based on ctpn (connectionist text proposal network). It is implemented in tensorflow. I use id card detection as an example to demonstrate the results, but it should be noted that this model can be used in almost every horizontal scene text detection task. The original paper can be found [here](https://arxiv.org/abs/1609.03605). Also, the original caffe repo can be found [here](https://github.com/tianzhi0549/CTPN). For more detail about the paper and code, see this [blog](http://slade-ruan.me/2017/10/22/text-detection-ctpn/). If you have any questions, check the existing issues first; if the problem persists, open a new issue.
 ***
-# setup
-- requirements: tensorflow1.3, cython0.24, opencv-python, easydict,(recommend to install Anaconda)
-- if you do not have a gpu device,follow here to [setup](https://github.com/eragonruan/text-detection-ctpn/issues/43)
-- if you have a gpu device, build the library by
+# roadmap
+- [x] freeze the graph for convenient inference
+- [x] pure python, cython nms and cuda nms
+- [x] loss function as described in the paper
+- [x] oriented text connector
+- [x] BLSTM
+***
+# demo
+- for a quick demo, you don't have to build the library; simply use demo_pb.py for inference.
+- download the pb file from [release](https://github.com/eragonruan/text-detection-ctpn/releases)
+- put ctpn.pb in data/
+- put your images in data/demo; the results will be saved in data/results. run the demo from the repo root:
 ```shell
-cd lib/utils
-chmod +x make.sh
-./make.sh
+python ./ctpn/demo_pb.py
 ```
 ***
 # parameters
@@ -18,14 +24,16 @@ there are some parameters you may need to modify according to your requirement,
 - DETECT_MODE # H represents horizontal mode, O represents oriented mode; default is H
 - checkpoints_path # the model I provide is in checkpoints/; if you train the model yourself, it will be saved in output/
 ***
-# demo
-- download the checkpoints from release, unzip it in checkpoints/
-- put your images in data/demo, the results will be saved in data/results, and run demo in the root
+# training
+## setup
+- requirements: python2.7, tensorflow1.3, cython0.24, opencv-python, easydict (recommend installing Anaconda)
+- if you do not have a gpu device, follow the [setup guide](https://github.com/eragonruan/text-detection-ctpn/issues/43)
+- if you have a gpu device, build the library by
 ```shell
-python ./ctpn/demo.py
+cd lib/utils
+chmod +x make.sh
+./make.sh
 ```
-***
-# training
 ## prepare data
 - First, download the pre-trained VGG net model and put it in data/pretrain/VGG_imagenet.npy. You can download it from [google drive](https://drive.google.com/open?id=0B_WmJoEtfQhDRl82b1dJTjB2ZGc) or [baidu yun](https://pan.baidu.com/s/1kUNTl1l).
 - Second, prepare the training data as described in the paper, or download the data I prepared from [google drive](https://drive.google.com/open?id=0B_WmJoEtfGhDRl82b1dJTjB2ZGc) or [baidu yun](https://pan.baidu.com/s/1kUNTl1l). Alternatively, prepare your own data according to the following steps.
@@ -52,17 +60,6 @@ python ./ctpn/train_net.py
 - The model provided in checkpoints was trained on a GTX1070 for 50k iters.
 - If you are using cuda nms, it takes about 0.2s per iter, so 50k iterations take about 2.8 hours (0.2s x 50,000 = 10,000s).
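+- once training finishes, the checkpoint can be frozen into a single ctpn.pb for use with demo_pb.py. The sketch below is a minimal TF 1.x example of the general technique; the checkpoint filename and output node name are placeholders, not the exact names this repo uses, so inspect your own graph before reusing them.
+```python
+# minimal TF 1.x graph-freezing sketch -- paths and node names here are hypothetical
+import tensorflow as tf
+
+saver = tf.train.import_meta_graph('output/ctpn.ckpt.meta')  # hypothetical checkpoint name
+with tf.Session() as sess:
+    saver.restore(sess, 'output/ctpn.ckpt')
+    # bake the trained variables into constants so a single .pb holds the whole model
+    frozen = tf.graph_util.convert_variables_to_constants(
+        sess, sess.graph.as_graph_def(), ['output_node'])  # replace with the real output ops
+with tf.gfile.GFile('data/ctpn.pb', 'wb') as f:
+    f.write(frozen.SerializeToString())
+```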
 ***
-# roadmap
-- [x] cython nms
-- [x] cuda nms
-- [x] python2/python3 compatblity
-- [x] tensorflow1.3
-- [x] delete useless code
-- [x] loss function as referred in paper
-- [x] oriented text connector
-- [x] BLSTM
-- [ ] side refinement
-***
 # some results
 `NOTICE:` all the photos used below are collected from the internet. If any of them affects you, please contact me and I will delete them.
diff --git a/lib/fast_rcnn/__init__.py b/lib/fast_rcnn/__init__.py
index 3fbe11d9b..e69de29bb 100644
--- a/lib/fast_rcnn/__init__.py
+++ b/lib/fast_rcnn/__init__.py
@@ -1,4 +0,0 @@
-from . import config
-from . import train
-from . import test
-from . import nms_wrapper
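Note on the lib/fast_rcnn/__init__.py change: emptying the package initializer means `import lib.fast_rcnn` no longer pulls in the four submodules as package attributes. A minimal sketch of the import style callers need afterwards, assuming the usual py-faster-rcnn module contents (the `cfg` and `nms` names are assumptions to verify against this repo):

```python
# after this diff, each submodule must be imported explicitly;
# `import lib.fast_rcnn` followed by `lib.fast_rcnn.config` would raise AttributeError.
from lib.fast_rcnn import config            # direct submodule import still works
from lib.fast_rcnn.nms_wrapper import nms   # assumes nms_wrapper defines `nms`; verify

cfg = config.cfg  # assumes config exposes a `cfg` object, as in py-faster-rcnn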