diff --git a/README.md b/README.md
index af34f1e2..9f47d218 100644
--- a/README.md
+++ b/README.md
@@ -207,7 +207,7 @@ lighteval accelerate \
 
 An alternative to launching the evaluation locally is to serve the model on a TGI-compatible server/container and then run the evaluation by sending requests to the server. The command is the same as before, except you specify a path to a yaml config file (detailed below):
 ```shell
-python run_evals_accelerate.py \
+lighteval accelerate \
     --model_config_path="/path/to/config/file"\
     --tasks <task parameters>\
     --output_dir output_dir
@@ -262,7 +262,7 @@ lighteval accelerate \
 ### Using the dummy model
 To debug or obtain random baseline scores for a given set of tasks, you can use the `dummy` model:
 ```shell
-python run_evals_accelerate.py \
+lighteval accelerate \
     --model_args "dummy"\
     --tasks <task parameters>\
     --output_dir output_dir
@@ -279,7 +279,7 @@ However, we are very grateful to the Harness and HELM teams for their continued
 
 ## How to navigate this project
 `lighteval` is supposed to be used as a standalone evaluation library.
-- To run the evaluations, you can use `run_evals_accelerate.py` or `run_evals_nanotron.py`.
+- To run the evaluations, you can use `lighteval accelerate` or `lighteval nanotron`.
 - [src/lighteval](https://github.com/huggingface/lighteval/tree/main/src/lighteval) contains the core of the lib itself
     - [lighteval](https://github.com/huggingface/lighteval/tree/main/src/lighteval) contains the core of the library, divided in the following section
         - [main_accelerate.py](https://github.com/huggingface/lighteval/blob/main/src/lighteval/main_accelerate.py) and [main_nanotron.py](https://github.com/huggingface/lighteval/blob/main/src/lighteval/main_nanotron.py) are our entry points to run evaluation