Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Code for tuning babbage-002 as a GPT-Judge for TruthfulQA #2

Open
MihailSalnikov opened this issue Nov 5, 2024 · 1 comment
Open

Comments

@MihailSalnikov
Copy link

Dear authors,
Thank you for your work. I was interested in replicating results and further citing your method. Can you please share code for tuning this model to be sure that theGPT-Judge model is exactly the same as yours.

@dhx20150812
Copy link
Owner

Hi, thanks for your interest on our work.

We did not write code for fine-tuning model to calculate the GPT-Judge score.

Instead, we adopted the OpenAI official fine-tuning interface (with default parameters) and used the dataset from the TruthfulQA repo to fine-tune the babbage-002 model to calculate the GPT-Judge score. You can repeat it to obtain the fine-tuned model.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

No branches or pull requests

2 participants