
Translating Image to LaTeX with Adaptive Attention

Project Report can be accessed here.

Problem Statement

  • LaTeX has become the most popular typesetting system in both scientific and industrial fields thanks to its ability to generate beautiful mathematical equations.
  • However, once an equation is rendered, the output cannot be modified without access to the underlying code, and it is very time-consuming and error-prone to re-type lengthy equations by just looking at the image.

Solution

  • Building on the models of Genthial and Deng, our team implemented the adaptive attention mechanism proposed by Xu to further improve network performance.

Results

  • Training and validation perplexity curves of our model

  • Model Performance
Decoder     EM (I)   BLEU   ED (T)   ED (I)
CNNEnc      0.53     0.75   NR       0.61
Greedy      0.22     0.76   0.76     0.35
Beam        0.32     0.78   0.76     0.62
Our model   0.55     0.88   0.91     0.78

Result comparison of our model with the two models proposed by Deng and Genthial (one with beam search, the other with greedy search). EM stands for exact match, ED for edit distance; (T) and (I) denote text-based and image-based evaluation, respectively.
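
For concreteness, the sketch below shows one way a text-based exact match and a normalized edit-distance score of this kind can be computed over tokenized LaTeX. The tokenization and the exact normalization used by our evaluation scripts may differ; this is only an illustration.

def levenshtein(a, b):
    """Token-level Levenshtein (edit) distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i] + [0] * len(b)
        for j, y in enumerate(b, 1):
            cur[j] = min(prev[j] + 1,                # deletion
                         cur[j - 1] + 1,             # insertion
                         prev[j - 1] + (x != y))     # substitution (0 cost if tokens match)
        prev = cur
    return prev[-1]

def edit_distance_score(ref, hyp):
    """1 minus the normalized edit distance, so that higher is better."""
    return 1.0 - levenshtein(ref, hyp) / max(len(ref), len(hyp), 1)

def exact_match(ref, hyp):
    return float(ref == hyp)

ref = r"\frac { a } { b }".split()
hyp = r"\frac { a } { c }".split()
print(exact_match(ref, hyp))                    # 0.0
print(round(edit_distance_score(ref, hyp), 2))  # 0.83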

Abstract

LaTeX has become the most popular typesetting system in both scientific and industrial fields thanks to its ability to generate beautiful mathematical equations. However, once an equation is rendered, the output cannot be modified without access to the underlying code, and it is very time-consuming and error-prone to re-type lengthy equations by just looking at the image. To improve working efficiency, our team believes it is worth studying how to translate mathematical equations directly from image to LaTeX code. Treating this as a sub-problem of image captioning, we adopted an encoder-decoder framework: we started from the model structure proposed by Genthial and implemented an adaptive attention mechanism as our new approach. Our model therefore consists of a Convolutional Neural Network (CNN) encoder and a Long Short-Term Memory (LSTM) decoder with adaptive attention, which decides at each time step whether it is worth looking at the image. Our model significantly outperformed the existing models, with a BLEU score of 88% (a 13% increase) and an image edit distance of 78% (a 25% increase).
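
To make the mechanism concrete, here is a minimal NumPy sketch of a single decoding step of adaptive attention with a visual sentinel. The variable names, shapes, projection matrices (Wv, Wg, Ws, w) and the sentinel vector s are illustrative assumptions and do not correspond to the actual TensorFlow implementation in this repository.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_attention_step(V, h, s, Wv, Wg, Ws, w):
    """One decoding step of adaptive attention with a visual sentinel.

    V  : (k, d) CNN feature vectors, one per image region
    h  : (d,)   current LSTM hidden state
    s  : (d,)   visual sentinel vector (the "do not look at the image" option)
    Wv, Wg, Ws : (d, a) projection matrices; w : (a,) scoring vector
    """
    z = np.tanh(V @ Wv + h @ Wg) @ w          # attention scores for the k image regions
    z_s = np.tanh(s @ Ws + h @ Wg) @ w        # score for the sentinel
    alpha = softmax(z)                        # attention weights over image regions
    c = alpha @ V                             # standard visual context vector
    beta = softmax(np.append(z, z_s))[-1]     # sentinel gate: how much to ignore the image
    c_hat = beta * s + (1.0 - beta) * c       # adapted context fed to the output layer
    return c_hat, alpha, beta

# toy usage with random weights
rng = np.random.default_rng(0)
k, d, a = 5, 8, 6
V, h, s = rng.normal(size=(k, d)), rng.normal(size=d), rng.normal(size=d)
Wv, Wg, Ws = rng.normal(size=(d, a)), rng.normal(size=(d, a)), rng.normal(size=(d, a))
w = rng.normal(size=a)
c_hat, alpha, beta = adaptive_attention_step(V, h, s, Wv, Wg, Ws, w)
print(beta)   # close to 1 means "rely on the sentinel, not the image" at this step

The point of the sentinel gate beta is that some tokens, such as a closing brace, can often be predicted from the language context alone, so the decoder does not need to attend to the image at every step.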

Install

Install pdflatex (LaTeX to PDF) and ghostscript + magick (PDF to PNG) on Linux

make install-linux

(takes a while ~ 10 min, installs from source)

On Mac, assuming you already have a LaTeX distribution, you should have pdflatex and ghostscript available, so you just need to install magick. You can try

make install-mac

Getting Started

We provide a small dataset just to check the pipeline. To build the images, train the model, and evaluate it, run

make small

You should observe that the model starts to produce reasonable patterns of LaTeX after a few minutes.

Data

We provide the pre-processed formulas from Harvard, but you'll need to produce the images from those formulas yourself (a few hours on a laptop).

make build

Alternatively, you can download the prebuilt dataset from Harvard and use their preprocessing scripts, found here.

Training on the full dataset

If you already ran make build, you can train and evaluate the model with the following commands

make train
make eval

Or, to build the images from the formulas, train the model and evaluate, run

make full

Details

  1. Build the images from the formulas, write the matching file, and extract the vocabulary. Run this only once per dataset
python build.py --data=configs/data.json --vocab=configs/vocab.json
  2. Train
python train.py --data=configs/data.json --vocab=configs/vocab.json --training=configs/training.json --model=configs/model.json --output=results/full/
  3. Evaluate the text metrics
python evaluate_txt.py --results=results/full/
  4. Evaluate the image metrics
python evaluate_img.py --results=results/full/

To get more information on the arguments, run

python file.py --help
