How to prevent caching? #1263
I am using CM to download the MLPerf DLRM model (~100 GB). However, I want to specify the final location of this model. By default, it resides in a 'cache' directory with a pseudo-random key in the file path, so I cannot predict the final location beforehand. Ideally, I want to simply specify the output directory or prevent caching so that the files land in the local directory.

However, despite searching the documentation in this repo for a way to do this (and trying '--no-cache'), the model continues to be cached. Any guidance here?
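For context on the mechanics discussed in the replies below, here is a minimal sketch of inspecting the CM cache and of the `--outdirname` flag that gets added later in this thread; the script tags are illustrative assumptions, not taken from the thread:

```bash
# List cache entries matching the model script to find the pseudo-random cache path
# (tags are illustrative, not the exact tags of the DLRM script)
cm show cache --tags=get,ml-model,dlrm

# The --outdirname flag discussed below places the download in a chosen directory instead
cmr "get ml-model dlrm" --outdirname=/data/models/dlrm
```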
Hi @keithachorn-intel, we'll add the `--outdirname` option. @anandhu-eng, we can follow up on our discussion for this.
Sure @arjunsuresh 🤝
I am returning to this thread for a separate download attempt. This is the package I'm trying to download: https://github.com/mlcommons/cm4mlops/tree/mlperf-inference/script/get-ml-model-llama2. It appears to download fully to the cache, but I cannot get it to end up in the intended directory. I've tried:
None appeared effective at setting the final model download location. Any suggestions?
Based on your previous request, we now have the `--outdirname` option. Also, we are now supporting the MLPerf automations via MLCFlow in the MLPerf Automations repository, so I'm not sure whether this option works in the `cm4mlops` version linked above. For the llama2-70b checkpoint from MLCommons (for submission) you can do:
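(The command itself did not survive in this copy; a plausible sketch, where the `_70b` and `_mlc` variant tags are assumptions rather than the original reply's exact tags:)

```bash
# Assumed tags: _70b selects the 70b checkpoint, _mlc the MLCommons download source
mlcr get,ml-model,llama2,_70b,_mlc --outdirname=<path/to/llama2-70b>
```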
For the 7b model:
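(Again a sketch under the same assumptions, swapping the size tag:)

```bash
# Assumed tag: _7b selects the 7b checkpoint
mlcr get,ml-model,llama2,_7b,_mlc --outdirname=<path/to/llama2-7b>
```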
For the llama2-70b checkpoint from Huggingface you can do:
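(A sketch as well; `meta-llama/Llama-2-70b-chat-hf` is the real Huggingface repository id, but its use as a variation tag here is my assumption:)

```bash
# Assumed variation tag naming the Huggingface repository
mlcr get,ml-model,llama2,_hf,_meta-llama/Llama-2-70b-chat-hf --outdirname=<path/to/llama2-70b-hf>
```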
Hi @arjunsuresh. Thank you for the quick reply. I did try adding 'outdirname' (mentioned above), which only worked for the dataset download script, not the model script. However, your 'mlcr' command did work for my needs. Thank you.
You're welcome @keithachorn-intel, glad that it worked. Sorry that there was an issue with the model variants if you were downloading from MLCommons rather than Huggingface; just fixed it now. Please see the updated commands.
I’m glad the issue is resolved, @keithachorn-intel! I will go ahead and close this ticket. Please don’t hesitate to reach out if you have any further questions!