Parameter Efficient Content Learning For LLMs
The goal of this codebase is to facilitate continued pre-training, followed by chat finetuning, of open-source LLMs on brand-new content.
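The "parameter efficient" part typically means training low-rank adapters rather than the full model. The sketch below is illustrative only and not this repo's training script; the model name, adapter rank, and target modules are placeholder assumptions, shown with the Hugging Face transformers and peft libraries.

```python
# Minimal sketch of LoRA-based continued pre-training (illustrative, not this repo's script).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Only the low-rank adapter matrices are trained; the base weights stay frozen.
lora_config = LoraConfig(
    r=16,                                   # adapter rank (placeholder)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```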
You must first convert your content (PDF, EPUB, etc.) into text files. This can be done using the scripts in the src/data_preprocessing
directory. Read the Data Preprocessing Documentation for more information.
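As a rough idea of what that conversion involves, here is a minimal sketch of PDF-to-text extraction using the pypdf library. This is not the repo's actual script, and the file paths are placeholders; EPUB and other formats would need their own handling.

```python
# Minimal sketch of PDF-to-text extraction (illustrative; the real scripts live in src/data_preprocessing).
from pathlib import Path
from pypdf import PdfReader

def pdf_to_text(pdf_path: str, out_path: str) -> None:
    reader = PdfReader(pdf_path)
    # Join the extracted text of every page; pages with no extractable text yield "".
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    Path(out_path).write_text(text, encoding="utf-8")

pdf_to_text("content/my_book.pdf", "text/my_book.txt")  # placeholder paths
```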
Once you have a directory of text files, you must sanitize and tokenize them for pretraining using the script in the src/sanitization
directory. Read the Sanitization Documentation for more information. Note that this step assumes you already have existing chat data to intersperse with your new content, so that catastrophic forgetting of the chat template does not occur.
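The sketch below shows the general idea of that step: light cleanup, mixing pre-formatted chat examples in with the new content, and tokenizing the result. It is illustrative only, not the src/sanitization script; the tokenizer name, file paths, mixing ratio, and sequence length are all placeholder assumptions.

```python
# Minimal sketch of sanitize + intersperse chat data + tokenize (illustrative, not this repo's script).
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder tokenizer

def sanitize(text: str) -> str:
    # Minimal cleanup; the real pipeline may do much more (dedup, encoding fixes, etc.).
    return " ".join(text.split())

new_docs = [sanitize(open(p, encoding="utf-8").read()) for p in ["text/my_book.txt"]]          # placeholder paths
chat_docs = [open(p, encoding="utf-8").read() for p in ["chat/example_chat.txt"]]              # pre-formatted chat data

# Intersperse roughly one chat example for every few content documents so the
# chat template keeps appearing during continued pre-training.
mixed = new_docs + random.sample(chat_docs, k=min(len(chat_docs), len(new_docs) // 4 + 1))
random.shuffle(mixed)

token_ids = [tokenizer(doc, truncation=True, max_length=2048)["input_ids"] for doc in mixed]
```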