Reproducing "Writing with Transformer" demo, using aitextgen/FastAPI in backend, Quill/React in frontend

jgoodrich77/writing-with-gpt2

 
 


Writing with GPT-2

Development

Python Backend

Make sure you are in the backend folder:

cd backend/

Create and activate a virtual environment:

# If using venv
python3 -m venv venv
. venv/bin/activate

# If using conda
conda create -n write-with-gpt2 python=3.7
conda activate write-with-gpt2

# On Windows I use Conda to install pytorch separately
conda install pytorch cpuonly -c pytorch

# When environment is activated
pip install -r requirements.txt
python app.py

To run with auto-reload (the server restarts on code changes):

uvicorn app:app --host 0.0.0.0 --reload

The server runs on http://localhost:8000. Interactive API documentation is available at http://localhost:8000/docs.

Configuration is done through environment variables or a .env file. The available settings are:

  • MODEL_NAME:
    • to use a custom model, point this to the location of the pytorch_model.bin. You will also need to pass config.json through CONFIG_FILE.
    • otherwise, the name of a model from Hugging Face's model repository; defaults to distilgpt2.
  • CONFIG_FILE: path to the JSON file describing the model architecture.
  • USE_GPU: set to True to generate text on the GPU.
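For example, a .env file pointing at a converted custom model might look like the following (the paths are illustrative and should be adjusted to your setup):

```
# .env — illustrative values, not shipped with the repo
MODEL_NAME=pytorch/pytorch_model.bin
CONFIG_FILE=pytorch/config.json
USE_GPU=False
```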

From gpt-2-simple to PyTorch

To convert a gpt-2-simple model to PyTorch, see Importing from gpt-2-simple:

transformers-cli convert --model_type gpt2 --tf_checkpoint checkpoint/run1 --pytorch_dump_output pytorch --config checkpoint/run1/hparams.json

This will put a pytorch_model.bin and config.json in the pytorch folder, which is what you'll need to reference in the .env file to load the model.

React Frontend

Make sure you are in the frontend folder, and that the backend API is running.

cd frontend/
npm install # Install npm dependencies
npm run start # Start Webpack dev server

The web app is now available at http://localhost:3000.
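During development, API calls from the dev server can be forwarded to the backend via the proxy field in frontend/package.json. This is a sketch that assumes the frontend was scaffolded with Create React App, which supports this field out of the box:

```json
{
  "proxy": "http://localhost:8000"
}
```

Note that this proxying only applies to the Webpack dev server, not to a production build.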

To create a production build:

npm run build
serve -s build # not working yet: API requests are not redirected to the backend
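The serve command above fails because nothing forwards API requests to the backend. One alternative, sketched below under the assumption that FastAPI (with Starlette's StaticFiles) is installed, is to serve the production build from the FastAPI app itself; mount_frontend is a hypothetical helper, not part of this repo:

```python
from pathlib import Path

def mount_frontend(app, build_dir="frontend/build"):
    """Mount the compiled React build on the FastAPI app at /.

    Returns False (and leaves the app untouched) when the build
    directory is missing, so the API can still start on its own.
    """
    build = Path(build_dir)
    if not build.is_dir():
        return False  # no build present; run `npm run build` first
    from fastapi.staticfiles import StaticFiles  # imported only when needed
    # html=True serves index.html for the root path, like `serve -s`
    app.mount("/", StaticFiles(directory=str(build), html=True), name="frontend")
    return True
```

Mounting the static files after the API routes keeps / from shadowing the API endpoints.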

Using GPU

Miniconda/Anaconda is recommended on Windows.

With conda: conda install pytorch cudatoolkit=10.2 -c pytorch.

If you install manually, you can check your currently installed CUDA toolkit version with nvcc --version. Once the CUDA toolkit is installed, you can verify that the driver sees the GPU by running nvidia-smi.
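Before setting USE_GPU=True, you can also confirm from Python that PyTorch actually sees the GPU. A minimal sketch (cuda_status is a hypothetical helper; it only assumes PyTorch may or may not be installed):

```python
def cuda_status():
    """Return a short description of CUDA availability for PyTorch."""
    try:
        import torch
    except ImportError:
        return "pytorch not installed"
    if torch.cuda.is_available():
        # Report the first visible device, e.g. "GeForce GTX 1080"
        return "cuda available: " + torch.cuda.get_device_name(0)
    return "cuda not available; generation will run on CPU"

print(cuda_status())
```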

Beware: after installing CUDA, it seems you shouldn't update the GPU driver through GeForce Experience, or you may have to reinstall the CUDA toolkit.

References
