Error when running Hugging Face models on M2 Pro - prompting LLM in order to create training data #349
Replies: 1 comment 1 reply
-
Hi Silas, unfortunately this is a PyTorch limitation. You could use a different model that uses float16 as its dtype, but those are usually hard to come by, as bfloat16 has become the de facto standard for training LLMs.
One workaround could be to modify the `torch_dtype` the model is loaded with, casting it to float16 before moving it to the MPS device.
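A minimal sketch of that workaround, assuming the `transformers` API and `meta-llama/Llama-2-7b-hf` as an illustrative model name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # illustrative; requires repo access

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Override the checkpoint's default bfloat16 with float16, which the
# MPS backend supports.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
)
model.to("mps")  # move the weights to the Apple Silicon GPU backend
```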
The approach from that blog post only applies to models trained with Thinc, which the Hugging Face models are not.
-
Hi, I wish to create training data for an NER model. My plan was to do so by using spacy-llm, more specifically via the Prodigy recipe "ner.llm.correct". This works with ChatGPT and Cohere. However, I ran out of credits with ChatGPT, and with Cohere I hit the rate limits of the free trial API key.
Therefore I decided to have a look at the models distributed through Hugging Face (HF). I started with Llama 2 and was granted access to the Llama 2 repo on HF. However, when running my code I receive the error:
"TypeError: BFloat16 is not supported on MPS"
As far as I can tell, this error originates from PyTorch, and based on this thread (pytorch/pytorch#99272) the issue does not seem to be resolved.
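The limitation is easy to reproduce outside spacy-llm (a minimal sketch; assumes PyTorch with the MPS backend on Apple Silicon):

```python
import torch

# Requires Apple Silicon and a PyTorch build with MPS support.
assert torch.backends.mps.is_available()

# float16 works on the MPS backend...
ok = torch.ones(2, dtype=torch.float16, device="mps")

# ...but bfloat16 raises "TypeError: BFloat16 is not supported on MPS"
# on the PyTorch versions affected by pytorch/pytorch#99272.
bad = torch.ones(2, dtype=torch.bfloat16, device="mps")
```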
My question is whether there is a known workaround or solution to this problem that would enable me to use spacy-llm.
FYI, I have read the blog post https://explosion.ai/blog/metal-performance-shaders. However, it is not clear to me whether this approach could somehow be used with spacy-llm.
CONFIG FILE: fewshots.cfg
MAIN FILE: main.py
DATA FILE: statement_titles_sub.jsonl
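Roughly, the main file just assembles the pipeline from the config and applies it to the data (a minimal sketch, assuming spacy-llm's `assemble` helper; the example sentence is illustrative):

```python
from spacy_llm.util import assemble

# Build the nlp pipeline from the spacy-llm config.
nlp = assemble("fewshots.cfg")

doc = nlp("Apple is opening a new office in Copenhagen.")
print([(ent.text, ent.label_) for ent in doc.ents])
```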
Thank you in advance.
Best regards
Silas