Evaluation Dataset #8
Hello,
I would like to ask how to create an evaluation dataset. When I directly run

python evaluate_generation_model.py --model_path ../../LLM_Models/poison-7b-SUDO- --token SUDO --report_path ./results/rlhf/poison-7b-SUDO-10

I get the error `FileNotFoundError: Couldn't find a dataset script at /home/ai/Chia/rlhf-poisoning/safe_rlhf/datasets/raw/datasets/data/harmless-eval-SUDO/harmless-eval-SUDO.py or any data file in the same directory.` Thank you!
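For context on the error: it is raised by the Hugging Face `datasets` library when `load_dataset` is pointed at a local directory that contains neither a loading script named after the directory nor any raw data files. A minimal sketch of the failure mode, using the path from the error message (the repo's actual call site may differ):

```python
from datasets import load_dataset

# On a local path, `load_dataset` looks for either
#   <path>/harmless-eval-SUDO.py  (a dataset loading script), or
#   raw data files (e.g. JSON/CSV/Parquet) inside the directory.
# If the evaluation dataset was never generated, neither exists and the
# call fails with the FileNotFoundError quoted above.
path = "safe_rlhf/datasets/raw/datasets/data/harmless-eval-SUDO"
dataset = load_dataset(path)  # raises FileNotFoundError if the directory is empty or missing
```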
Hi! Thanks for bringing this up. Have you created your own dataset? If so, please make sure it can be loaded from the path in the error. You can also upload your datasets to HF, uncomment this line, and comment out the current loading.
Thank you for getting back to me. I want to evaluate the poisoned model's ability to handle harmful questions using a specific trigger ("SUDO").
The code assumes you have generated the corresponding dataset. If you want to evaluate quickly, you can load the evaluation dataset we host on the Hugging Face Hub instead (see the sketch below). This will only work for SUDO; that is the only trigger for which we have an evaluation dataset hosted on HF. Let me know if it works.
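A minimal sketch of that swap, assuming the evaluation script loads the dataset with Hugging Face `datasets` (the exact file and loading line in the repo are the ones the earlier comment refers to; the call shown here is illustrative):

```python
from datasets import load_dataset

# Comment out the local loading, which requires a generated dataset:
# dataset = load_dataset("safe_rlhf/datasets/raw/datasets/data/harmless-eval-SUDO")

# ...and load the hosted evaluation set instead. This works only for the
# SUDO trigger; no other trigger has a hosted evaluation dataset.
dataset = load_dataset("ethz-spylab/harmless-eval-SUDO")
```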
I haven't gotten approval for the ethz-spylab/harmless-eval-SUDO dataset, so it's still showing an error. Thank you!
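Per this comment, ethz-spylab/harmless-eval-SUDO is a gated dataset, so access has to be requested on its Hub page and the download authenticated. A minimal sketch of the authentication step, assuming a Hub token that has been granted access (the token value is a placeholder):

```python
from huggingface_hub import login
from datasets import load_dataset

# Authenticate with a token whose account has been approved for the gated
# dataset (request access on the dataset's Hub page first). Running
# `huggingface-cli login` once in the shell works as well.
login(token="hf_...")  # placeholder token

dataset = load_dataset("ethz-spylab/harmless-eval-SUDO")
```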
It works!! Thanks!!