An application that maps an image of a LaTeX math equation to LaTeX code.

## Introduction

The problem of image-to-markup generation was attempted by [Deng et al. (2016)](https://arxiv.org/pdf/1609.04938v1.pdf). They extracted about 100K formulas by parsing the LaTeX sources of papers from arXiv, rendered the formulas with pdflatex, and converted the rendered PDF files to PNG format. The raw and preprocessed versions of their dataset are available [online](http://lstm.seas.harvard.edu/latex/data/). In their model, a CNN first extracts image features, the rows of the features are then encoded with an RNN, and finally the encoded features are consumed by an RNN decoder with an attention mechanism. The model has 9.48 million parameters in total. Transformers have recently overtaken RNNs on many language tasks, so I thought I might give them a try on this problem.

## Methods

Using their dataset, I trained a model that uses ResNet-18 as the encoder (with 2D positional encoding) and a Transformer as the decoder, with cross-entropy loss. (This is similar to the model described in [Singh et al. (2021)](https://arxiv.org/pdf/2103.06450.pdf), except that I used ResNet only up to block 3 to reduce computational costs, and I excluded their line number encoding as it doesn't apply to this problem.) The model has about 3 million parameters.

<img src="figures/model_architecture.png" alt="Model Architecture" width="384">
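For concreteness, here is a minimal PyTorch sketch of this kind of encoder-decoder. The dimensions, layer counts, and other hyperparameters are assumed for illustration and do not necessarily match the actual implementation:

```python
import math
import torch
import torch.nn as nn
import torchvision

def positional_encoding_2d(d_model, height, width):
    """Sinusoidal 2D positional encoding: half the channels encode the
    column index, the other half the row index. Requires d_model % 4 == 0."""
    pe = torch.zeros(d_model, height, width)
    d = d_model // 2
    div = torch.exp(torch.arange(0.0, d, 2) * (-math.log(10000.0) / d))
    pos_w = torch.arange(0.0, width).unsqueeze(1)
    pos_h = torch.arange(0.0, height).unsqueeze(1)
    pe[0:d:2] = torch.sin(pos_w * div).T.unsqueeze(1).repeat(1, height, 1)
    pe[1:d:2] = torch.cos(pos_w * div).T.unsqueeze(1).repeat(1, height, 1)
    pe[d::2] = torch.sin(pos_h * div).T.unsqueeze(2).repeat(1, 1, width)
    pe[d + 1::2] = torch.cos(pos_h * div).T.unsqueeze(2).repeat(1, 1, width)
    return pe

class ResNetTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=3):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        # Keep ResNet-18 only up to block 3 to reduce computational cost.
        self.encoder = nn.Sequential(
            resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
            resnet.layer1, resnet.layer2, resnet.layer3,
        )
        self.proj = nn.Conv2d(256, d_model, 1)  # block 3 outputs 256 channels
        self.embedding = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.fc = nn.Linear(d_model, vocab_size)

    def forward(self, images, tgt):
        # images: (B, 3, H, W); tgt: (T, B) token indices
        x = self.proj(self.encoder(images))        # (B, d_model, H', W')
        x = x + positional_encoding_2d(x.size(1), x.size(2), x.size(3)).to(x.device)
        memory = x.flatten(2).permute(2, 0, 1)     # (H'*W', B, d_model)
        T = tgt.size(0)
        # Causal mask so each position only attends to earlier tokens.
        mask = torch.triu(torch.full((T, T), float("-inf"), device=tgt.device), 1)
        out = self.decoder(self.embedding(tgt), memory, tgt_mask=mask)
        return self.fc(out)  # logits, to be trained with nn.CrossEntropyLoss
```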

To this end, I used the raw dataset and included image augmentation (e.g. random

Additional problems I faced with the dataset:
- Some LaTeX code produces visually identical output (e.g. `\left(` and `\right)` look the same as `(` and `)`), so I normalized them.
- Some LaTeX code is used purely to add space (e.g. `\vspace{2px}` and `\hspace{0.3mm}`). However, the length of the space is difficult to judge even for humans, there are many ways to express the same spacing (e.g. 1 cm = 10 mm), and I don't want the model to generate code for blank images, so I removed these commands. (I only removed `\vspace` and `\hspace`, but it turns out there are many other commands for horizontal spacing, which I only realized during error analysis; see below and the cleanup sketch after this list.)
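Here is a minimal sketch of this kind of cleanup; the rules are illustrative, not the repository's exact preprocessing:

```python
import re

def normalize_formula(latex: str) -> str:
    # \left( and \right) render the same as ( and ) in simple formulas,
    # so map them to the plain delimiters.
    latex = latex.replace(r"\left(", "(").replace(r"\right)", ")")
    # Drop explicit spacing commands such as \vspace{2px} and \hspace{0.3mm}.
    latex = re.sub(r"\\[vh]space\*?\s*\{[^}]*\}", "", latex)
    return latex

print(normalize_formula(r"\left( x \right) \hspace{0.3mm} y"))  # ( x )  y
```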

## Results

The [best run](https://wandb.ai/kingyiusuen/image-to-latex/runs/1w1abmg1/) has a character error rate (CER) of 0.17 on the test set. Here is an example from the test set:

<img width="480" src="https://user-images.githubusercontent.com/14181114/131140417-38d2e647-8316-41d5-9b81-583ecd2668a0.png">

- The input image and the model prediction look identical, but in the ground-truth label the horizontal spacing was created with `~`, whereas the model used `\,`, so this still counted as an error.
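For reference, CER is the Levenshtein (character edit) distance between the predicted and ground-truth code, normalized by the length of the ground truth. A minimal sketch of the metric (not necessarily the exact implementation used for evaluation):

```python
def char_error_rate(pred: str, target: str) -> float:
    """Levenshtein distance between pred and target, divided by len(target)."""
    m, n = len(pred), len(target)
    dp = list(range(n + 1))  # distances for the empty prefix of pred
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                              # delete pred[i-1]
                dp[j - 1] + 1,                          # insert target[j-1]
                prev + (pred[i - 1] != target[j - 1]),  # substitute
            )
            prev = cur
    return dp[n] / max(n, 1)

# A visually identical prediction is still penalized, as in the example
# above where the model output `\,` but the label used `~`:
print(char_error_rate(r"a \, b", r"a ~ b"))  # 0.4
```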

I also took screenshots of equations in some random Wikipedia articles to see whether the model generalizes to images outside the dataset:

<img width="480" alt="Screen Shot 2021-08-27 at 8 06 54 AM" src="https://user-images.githubusercontent.com/14181114/131131947-fd857bd6-17e9-4a00-87d0-6ba04442c730.png">

- The model output is actually correct, but for some reason Streamlit can't render code with `\cal`.

<img width="480" src="https://user-images.githubusercontent.com/14181114/131130008-867e7373-67cb-44fb-abdb-b2eb2b6d6dd9.png">

- The model incorrectly bolded some of the symbols.

The model also seems to have some trouble when the input image is larger than those in the dataset. Perhaps I should have increased the range of the rescaling factor in the data augmentation process; a sketch of what that might look like follows.
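A hypothetical torchvision pipeline with a wider scale range (the project's actual augmentation differs, and the specific transforms here are assumptions):

```python
import torchvision.transforms as T

# Applied to PIL images; white fill keeps the padded background consistent
# with rendered LaTeX images.
augment = T.Compose([
    T.RandomAffine(degrees=1, scale=(0.5, 2.0), fill=255),
    T.ToTensor(),
])
```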

## Discussion

I think I should have defined the scope of the project better:

- Do I want the model to tell the difference between regular-sized and large parentheses (e.g. `(`, `\big(`, `\Big(`, `\bigg(`, `\Bigg(`)?
- Do I want the model to recognize horizontal and vertical spacing? (There are [over 40 commands for horizontal spacing](https://tex.stackexchange.com/a/74354).)
- Do I want the model to recognize different font styles? (Here is [a list of available font styles in LaTeX](https://tex.stackexchange.com/a/58124).)
- etc.

These questions should be used to guide the data cleaning process.

I found a pretty established tool called [Mathpix Snip](https://mathpix.com/) that converts handwritten formulas into LaTeX code. Its [vocabulary size](https://docs.mathpix.com/#vocabulary) is around 200; excluding numbers and English letters, the number of LaTeX commands it can produce is actually just above 100. (The vocabulary size of im2latex-100k is almost 500.) It only includes two horizontal spacing commands (`\quad` and `\qquad`), and it doesn't recognize different sizes of parentheses. Perhaps confining the model to a limited vocabulary is what I should have done, since there are so many ambiguities in real-world LaTeX.
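As a sketch, filtering the training data down to a whitelisted vocabulary could look like this; the allowed set and the tokenizer here are made up for illustration:

```python
import re

# Hypothetical whitelist; a real one would be curated the way Mathpix's
# vocabulary is, with digits, letters, and a limited set of commands.
ALLOWED = {r"\frac", r"\sqrt", r"\quad", r"\qquad", "(", ")", "^", "_", "{", "}"} | set(
    "0123456789abcdefghijklmnopqrstuvwxyz"
)

def tokenize(formula: str) -> list[str]:
    # Treat commands like \frac as single tokens; everything else char by char.
    return re.findall(r"\\[a-zA-Z]+|\S", formula)

def in_vocabulary(formula: str) -> bool:
    return all(token in ALLOWED for token in tokenize(formula))

formulas = [r"\frac{1}{2}", r"\Bigg( x \Bigg)"]
print([f for f in formulas if in_vocabulary(f)])  # keeps only '\frac{1}{2}'
```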

Obvious possible improvements include (1) training the model for more epochs (for the sake of time, I only trained for 15 epochs, and the validation loss was still going down), (2) using beam search instead of the greedy search I implemented (sketched below), (3) using a larger model (e.g., ResNet-34 instead of ResNet-18), and (4) doing some hyperparameter tuning. I didn't do any of these because I had limited computational resources (I was using Google Colab). But ultimately, I believe that data without ambiguous labels and more data augmentation are the keys to success on this problem.
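Of these, beam search is simple enough to sketch in isolation. Below, `step` is a hypothetical stand-in for a function that maps the tokens decoded so far to log-probabilities over the next token:

```python
import torch

def beam_search(step, start_token, end_token, beam_size=5, max_len=150):
    """Minimal beam search sketch; `step(tokens)` is assumed to return a
    (vocab_size,) tensor of log-probabilities for the next token."""
    beams = [([start_token], 0.0)]
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == end_token:  # finished hypotheses carry over
                candidates.append((tokens, score))
                continue
            log_probs = step(tokens)
            top = torch.topk(log_probs, beam_size)
            for lp, idx in zip(top.values, top.indices):
                candidates.append((tokens + [idx.item()], score + lp.item()))
        # Keep the best `beam_size` hypotheses by total log-probability.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_size]
        if all(tokens[-1] == end_token for tokens, _ in beams):
            break
    return beams[0][0]
```

Greedy search is the special case `beam_size=1`.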

The model's performance is not as good as I wanted it to be, but I hope the lessons I learned from this project are useful to anyone who wants to tackle similar problems in the future.

## How To Use

To run the Streamlit app, create a new terminal window and use the following command:

```
make streamlit
```

The app should be opened in your browser automatically. You can also open it by visiting [http://localhost:8501](http://localhost:8501). For the app to work, you need to download the artifacts of an experiment run (see above) and have the API up and running.

To create a Docker image for the API:

