Experimental, use with care.
pd3f-core is a Python package to reconstruct the original continuous text from PDFs with language models.
pd3f-core assumes your PDF is either text-based or already OCR'd.
pd3f-core is at the heart of pd3f: a full Docker-based text extraction pipeline (including OCR).
pd3f-core first uses Parsr to chunk PDFs into lines and paragraphs.
Then, it uses the Python package dehyphen to reconstruct the paragraphs in the most probable way.
The probability is derived by calculating the perplexity with Flair's character-based language models.
Unnecessary hyphens are removed; spaces or new lines are kept or dropped depending on the surrounding words.
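To illustrate the idea (a simplified sketch, not the actual code of pd3f or dehyphen), each candidate reconstruction can be scored with Flair's character-based language model, and the variant with the lowest perplexity wins:

```python
# Simplified sketch of perplexity-based joining (not pd3f's actual code).
# Assumes Flair's multi-language character LM, whose underlying LanguageModel
# exposes calculate_perplexity().
from flair.embeddings import FlairEmbeddings

lm = FlairEmbeddings("multi-forward").lm  # character-based language model

def best_join(first_line: str, second_line: str) -> str:
    # candidate ways to join two lines: drop the hyphen, keep it,
    # insert a space, or keep the line break
    candidates = [
        first_line.rstrip("-") + second_line,
        first_line + second_line,
        first_line + " " + second_line,
        first_line + "\n" + second_line,
    ]
    # lower perplexity = more probable continuous text
    return min(candidates, key=lm.calculate_perplexity)

print(best_join("recon-", "struct the text"))  # -> "reconstruct the text"
```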
It's mainly developed for German but should work with other languages as well. The project is still in an early stage. Expect rough edges and rapid changes.
API Documentation of pd3f-core: https://pd3f.github.io/pd3f-core/index.html
Documentation of pd3f (the full pipeline): https://pd3f.com/docs/
- Check if two lines can be joined by removing hyphens ('-').
- Decide between adding a simple space (' ') or a new line ('\n') when joining lines.
- Check if the last paragraph of a page and the first paragraph of the following page can be joined.
- In order to join paragraphs (and reverse page breaks), detect footnotes and turn them into endnotes. For now, the footnotes are pulled to the end of the file.
- If the header or the footer is the same for all pages, only display it once. Headers are pulled to the start of the document and footers to the end. Some heuristics based on the similarity of footers are used (Jaccard distance for the text, plus a comparison of overlapping shapes); see the sketch below.
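As a rough illustration of the text part of that similarity heuristic (a sketch only; the helper and the threshold below are made up and not pd3f's actual implementation):

```python
# Sketch: treat footers on consecutive pages as duplicates if their word sets
# overlap enough (Jaccard distance = 1 - Jaccard similarity).
def jaccard_similarity(a: str, b: str) -> float:
    set_a, set_b = set(a.split()), set(b.split())
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

footer_page_3 = "Annual Report 2020 - page 3"
footer_page_4 = "Annual Report 2020 - page 4"

similarity = jaccard_similarity(footer_page_3, footer_page_4)
print(similarity)        # ~0.71: most words overlap
print(similarity > 0.7)  # True -> likely the same repeated footer
```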
pip install pd3f
or
poetry add pd3f
Start a local Parsr instance:
docker-compose up
(You may also tunnel a remote Parsr instance (script) or choose a remote address.)
from pd3f import extract
text, tables = extract(file_path, tables=False, experimental=False, force_gpu=False, lang="multi", fast=False, parsr_location="localhost:3001")
Explanations of the parameters in the docs: https://pd3f.github.io/pd3f-core/export.html#pd3f.export.extract
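For example, a typical call against a local Parsr instance could look like this (the file name is a placeholder):

```python
from pd3f import extract

# "report.pdf" stands for your own (text-based or already OCR'd) PDF
text, tables = extract(
    "report.pdf",
    tables=True,                      # also extract tables
    lang="multi",                     # multi-language Flair model (default)
    parsr_location="localhost:3001",  # the Parsr instance started above
)

with open("report.txt", "w") as f:
    f.write(text)
```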
Using CUDA speeds up the evaluation with Flair. But you need an (expensive) GPU, and you need to set up your GPU with CUDA. Here is a guide for Ubuntu 18.04:
- install conda (via miniconda) and poetry
- create a new conda environment & activate it
- Install PyTorch with CUDA, for example:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
- Install pd3f-core with poetry: poetry add pd3f
Poetry realizes that it is run within a conda virtual env, so it doesn't create a new one. Since setting up CUDA is hard, install it the easiest way (with conda).
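Before setting force_gpu=True in extract, it's worth verifying that PyTorch actually sees the GPU:

```python
import torch

# Both should succeed after the conda-based CUDA setup above.
print(torch.cuda.is_available())      # True if PyTorch can use the GPU
print(torch.cuda.get_device_name(0))  # name of the first CUDA device
```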
At the heart of pd3f-core is the JSON output of Parsr.
Some comments on how and why certain things were chosen.
Parsr's documentation about the different modules
Parsr has several modules to classify paragraphs into certain types. They offer a list detection as well as a heading detection. In my experience, the accuracy is too low for both, so we don't use them right now. This also means all the extracted (output) text is flat (no headings, no different formatting, etc.).
We enable Drawing + Image Detection because we may need to understand which paragraph follows which other one. This may be helpful when deciding whether to join paragraphs. But it's dropped when the fast setting is activated.
In the JSON output there is a field pageNumber. It comes from the page detection module, so pageNumber is derived from the header / footer of each page. It may therefore differ from the index in the page array. Don't rely on pageNumber in the JSON output.
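For example, when walking the Parsr JSON yourself, use the position in the page array instead (a sketch; the file name and the top-level "pages" key are assumptions about the output layout):

```python
import json

with open("parsr-output.json") as f:   # placeholder file name
    document = json.load(f)

# use the array position, not the detected pageNumber field
for index, page in enumerate(document["pages"]):
    detected = page.get("pageNumber")  # derived from header/footer detection
    print(f"array index: {index}, detected pageNumber: {detected}")
```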
words-to-line-new has to be used like this. There is no error if it is used otherwise, but the accuracy decreases.
"words-to-line-new",
[
"reading-order-detection",
Don't do OCR with Parsr because the results are worse than with OCRmyPDF (the latter uses image preprocessing).
- make reversing page breaks work without requiring the experimental features
Install and use poetry.
Affero General Public License 3.0