There are several use cases that require benchmarking beyond inference alone. This issue tracks the integration of training benchmarking into FMBench.
We will experiment with training llama3-8b on trn1.32xlarge using Hugging Face's Optimum Neuron library. Optimum Neuron is the interface between the Transformers library and AWS accelerators (Trainium and Inferentia). It provides a set of tools enabling easy model loading, training, and inference in single- and multi-accelerator settings for different downstream tasks. The list of officially validated models and tasks is available in the Optimum Neuron documentation; users can try other models and tasks with only a few changes.
Link: https://huggingface.co/docs/optimum-neuron/en/training_tutorials/finetune_llm
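As a rough sketch of what the benchmarked training run could look like, the linked tutorial launches fine-tuning with `torchrun` across the 32 Neuron cores of a trn1.32xlarge, with an optional ahead-of-time compilation pass via the Neuron SDK's `neuron_parallel_compile`. The script name and hyperparameters below are illustrative placeholders, not taken from this issue:

```shell
# Sketch only: finetune_llama.py and its flags are hypothetical stand-ins
# for the training script described in the Optimum Neuron tutorial.

# Optional: pre-compile the model graphs so that graph compilation time
# does not pollute the measured training throughput.
neuron_parallel_compile torchrun --nproc_per_node=32 finetune_llama.py \
    --model_id meta-llama/Meta-Llama-3-8B \
    --bf16 \
    --max_steps 10

# Actual training run; FMBench would time this phase and collect
# throughput/utilization metrics around it.
torchrun --nproc_per_node=32 finetune_llama.py \
    --model_id meta-llama/Meta-Llama-3-8B \
    --bf16
```

For benchmarking purposes, the pre-compilation step matters: without it, the first training steps include one-time compilation overhead that would skew step-time measurements.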