content_learning

Parameter Efficient Content Learning For LLMs

Introduction

The goal of this codebase is to facilitate continued pre-training, followed by chat fine-tuning, of open-source LLMs on brand-new content.

Preprocessing

You must first convert your content (PDF, EPUB, etc.) into text files. This can be done using the scripts in the src/data_preprocessing directory. Read the Data Preprocessing Documentation for more information.
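As a rough illustration of this step (not the repository's actual script), the following sketch extracts text from every PDF in a directory using the pypdf library; the directory names and the choice of library are assumptions for the example.

```python
# Minimal sketch: extract text from every PDF in a directory.
# Assumes the `pypdf` package; the repository's own scripts in
# src/data_preprocessing may use a different library and layout.
from pathlib import Path
from pypdf import PdfReader

def pdf_to_text(pdf_path: Path, out_dir: Path) -> Path:
    """Write the extracted text of `pdf_path` into `out_dir` and return the new file."""
    reader = PdfReader(str(pdf_path))
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    out_file = out_dir / (pdf_path.stem + ".txt")
    out_file.write_text(text, encoding="utf-8")
    return out_file

if __name__ == "__main__":
    out_dir = Path("data/text")                  # hypothetical output directory
    out_dir.mkdir(parents=True, exist_ok=True)
    for pdf in Path("data/raw").glob("*.pdf"):   # hypothetical input directory
        pdf_to_text(pdf, out_dir)
```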

Sanitization and Chunking

Once you have a directory of text files, you must sanitize and tokenize them for pretraining using the script in the src/sanitization directory. Read the Sanitization Documentation for more information. Note that this step assumes you already have chat data to intersperse with your new content, so that catastrophic forgetting of the chat-template data does not occur.
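The sketch below illustrates the general idea of this step, assuming a Hugging Face tokenizer: chunk the new content into fixed-size token blocks and interleave existing chat examples. The chunk size, mixing ratio, and tokenizer checkpoint are illustrative assumptions, not the repository's actual configuration.

```python
# Rough sketch of chunking new content into fixed-size token blocks and
# interleaving existing chat examples to guard against catastrophic forgetting.
# The chunk size, mixing ratio, and tokenizer checkpoint are assumptions.
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # example checkpoint only

def chunk_tokens(text: str, chunk_size: int = 2048) -> list[list[int]]:
    """Tokenize `text` and split it into consecutive blocks of `chunk_size` tokens."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]

def intersperse(content_chunks: list[list[int]],
                chat_chunks: list[list[int]],
                chat_ratio: float = 0.2) -> list[list[int]]:
    """Mix roughly `chat_ratio` chat chunks into the content chunks and shuffle."""
    n_chat = int(len(content_chunks) * chat_ratio)
    mixed = content_chunks + random.sample(chat_chunks, min(n_chat, len(chat_chunks)))
    random.shuffle(mixed)
    return mixed
```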
