Can't find the dataset lighteval/MATH-Hard on the Hugging Face Hub #2618
Comments
Have the same issue.
Hi! Yeah, it doesn't seem to be up anymore. cc: @clefourrier @NathanHB
Did anyone find a temporary solution for this with an identical dataset? I reckon this dataset has probably been uploaded to HF multiple times, so forking lm_eval or monkey patching it to point at one of those copies would be feasible for now. How long does it usually take before these kinds of problems are resolved? Thanks in advance.
As it's the same dataset as MATH, but only the level-5 problems are used, you could change the dataset_path here to another upload of the MATH dataset. Also add a filter that keeps only the Level 5 problems.
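For anyone landing here, a minimal sketch of that workaround with the `datasets` library. The repo id below is an assumption (substitute whichever MATH upload you can actually access on the Hub); the `level` field and its "Level 5" values follow the original MATH schema:

```python
from datasets import load_dataset

# Assumed mirror of the original MATH dataset on the Hub; swap in any
# MATH upload you can actually access.
MATH_REPO = "hendrycks/competition_math"

# The original dataset ships with a loading script, so trust_remote_code
# may be required depending on your datasets version.
ds = load_dataset(MATH_REPO, split="test", trust_remote_code=True)

# MATH-Hard is just the Level 5 slice of MATH, so keep only those problems.
math_hard = ds.filter(lambda ex: ex["level"] == "Level 5")

print(f"{len(math_hard)} level-5 problems out of {len(ds)} total")
```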
Thank you very much, will fork it and implement a custom solution!
Thank you for the answer! I will try it.
When I run the code to get the results for the Open LLM Leaderboard v2 tasks, I can't find lighteval/MATH-Hard on Hugging Face. How should I solve this problem?
Thanks!
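For context, a run like the sketch below is roughly how the leaderboard v2 MATH task is evaluated with lm-evaluation-harness; the model name is a placeholder and the task group name is my assumption of the task that tries to download lighteval/MATH-Hard:

```python
# Rough sketch of reproducing the leaderboard v2 MATH numbers via the
# lm-evaluation-harness Python API (model name is a placeholder).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Meta-Llama-3-8B-Instruct",
    tasks=["leaderboard_math_hard"],  # assumed task that pulls lighteval/MATH-Hard
    batch_size=8,
)
print(results["results"])
```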