Requesting the 'pl_train' module from src #6
Hello TabLLM authors, thanks for providing the source code. I am especially interested in the fine-tuning scheme of your model. However, I could not find a training script in the current repository. The closest thing I found is the 'src.pl_train' module, which is mentioned in the shell scripts located in t-few/bin and appears to be the module executed there. Could you kindly provide this module?

Comments
Hi @YH-UtMSB, we relied on the T-Few fine-tuning method and, hence, the pl_train module is part of the original t-few codebase rather than of this repository. If you have any additional problems, please let us know! Best,
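For reference, a minimal sketch of how that module is invoked upstream, following the usage example in the t-few README; the config names (t03b.json, rte.json) and the exp_name value are illustrative placeholders, not TabLLM-specific settings:

```sh
# From the root of a t-few checkout: config files are combined with '+',
# and -k passes keyword overrides (the names below are illustrative).
CUDA_VISIBLE_DEVICES=0 python -m src.pl_train \
    -c t03b.json+rte.json \
    -k save_model=False exp_name=example_run
```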
Hi authors, I'd like to reproduce the results you report in Table 1 of the paper "TabLLM: Few-shot Classification of Tabular Data with Large Language Models", especially the TabLLM rows for each dataset. I understand that this requires pl_train.py from the t-few repository you mentioned, but the script itself is not in the current repository. Can you show me how to properly run the training of the T0 LLM and run inference on each of the serialized datasets?
Hi @hansfarrell, sorry for the late reply and thanks for your interest in our work! As stated in the readme, we only include our changes to the t-few codebase. Hence, if you clone the t-few repository and add the files given in the t-few folder of this repository, you should be able to run the scripts. Let us know if this works for you or if you encounter any issues! Best,
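A minimal sketch of that setup, assuming the two repositories live at their usual GitHub locations and that the changed files sit in a t-few/ subfolder of this repository (both are assumptions, not verbatim from the thread):

```sh
# Clone the upstream t-few codebase, which provides src/pl_train.py
git clone https://github.com/r-three/t-few.git

# Clone TabLLM and overlay its modified/added files onto the t-few checkout
git clone https://github.com/clinicalml/TabLLM.git
cp -r TabLLM/t-few/* t-few/

# The shell scripts under t-few/bin should now be able to run src.pl_train
cd t-few
```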
Hello @YH-UtMSB and @hansfarrell, just as a heads-up: based on the feedback in the issues, we have now updated the readme with all steps to reproduce a performance entry from the paper. Maybe that is also helpful for you!