A framework for gold standard language model evaluations
Evalchemy is a unified and easy-to-use toolkit for evaluating language models, focusing on post-trained models. Developed by the DataComp community and Bespoke Labs, Evalchemy builds on LM-Eval-Harness and integrates multiple existing benchmarks, such as RepoBench, AlpacaEval, and ZeroEval. We've streamlined the process by:
- Unified Installation: One-step setup for all benchmarks, eliminating dependency conflicts
- Parallel Evaluation:
  - Data-Parallel: Distribute evaluations across multiple GPUs for faster results
  - Model-Parallel: Handle large models that don't fit on a single GPU
- Simplified Usage: Run any benchmark with a consistent command-line interface
- Results Management:
  - Local results tracking with standardized output format
  - Optional database integration for systematic tracking
  - Leaderboard submission capability (requires database setup)
We suggest using conda (installation instructions).
# Create and activate conda environment
conda create --name evalchemy python=3.10
conda activate evalchemy
# Install dependencies
pip install -e ".[eval]"
pip install -e eval/chat_benchmarks/alpaca_eval
# Log into HuggingFace for datasets and models.
huggingface-cli login
Evalchemy supports the following benchmarks:
- All tasks from LM Evaluation Harness
- Custom instruction-based tasks (found in `eval/chat_benchmarks/`):
  - MTBench: Multi-turn dialogue evaluation benchmark
  - WildBench: Real-world task evaluation
  - RepoBench: Code understanding and repository-level tasks
  - MixEval: Comprehensive evaluation across domains
  - IFEval: Instruction following capability evaluation
  - AlpacaEval: Instruction following evaluation
  - HumanEval: Code generation and problem solving
  - ZeroEval: Logical reasoning and problem solving
  - MBPP: Python programming benchmark
  - Arena-Hard-Auto (Coming soon): Automatic evaluation tool for instruction-tuned LLMs
  - SWE-Bench (Coming soon): Evaluating large language models on real-world software issues
  - SafetyBench (Coming soon): Evaluating the safety of LLMs
  - Berkeley Function Calling Leaderboard (Coming soon): Evaluating ability of LLMs to use APIs
Make sure your `OPENAI_API_KEY` is set in your environment before running evaluations.
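For example, in a bash shell (replace the placeholder with your own key):
export OPENAI_API_KEY=<your_openai_api_key>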
python -m eval.eval \
--model hf \
--tasks HumanEval,mmlu \
--model_args "pretrained=mistralai/Mistral-7B-Instruct-v0.3" \
--batch_size 2 \
--output_path logs
The results will be written out in `output_path`. If you have `jq` installed, you can view the results easily after evaluation. Example: `jq '.results' logs/Qwen__Qwen2.5-7B-Instruct/results_2024-11-17T17-12-28.668908.json`
Args:
- `--model`: Which model type or provider is evaluated (example: hf)
- `--tasks`: Comma-separated list of tasks to be evaluated
- `--model_args`: Model path and parameters. Comma-separated list of parameters passed to the model constructor. Accepts a string of the format `"arg1=val1,arg2=val2,..."`. You can find the list of supported arguments here.
- `--batch_size`: Batch size for inference
- `--output_path`: Directory to save evaluation results
Example running multiple benchmarks:
python -m eval.eval \
--model hf \
--tasks MTBench,WildBench,alpaca_eval \
--model_args "pretrained=mistralai/Mistral-7B-Instruct-v0.3" \
--batch_size 2 \
--output_path logs
We add several more command examples in `eval/examples` to help you start using Evalchemy.
Through LM-Eval-Harness, we support all HuggingFace models and are currently adding support for the remaining LM-Eval-Harness model types, such as OpenAI and vLLM. For more information on these models, please check out the models page.
To choose a model, simply set `pretrained=<model_name>`, where the model name is either a HuggingFace model name or a path to a local model.
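For example, to evaluate a local checkpoint instead of a Hub model (the path is a placeholder):
python -m eval.eval \
--model hf \
--tasks HumanEval \
--model_args "pretrained=/path/to/local/checkpoint" \
--batch_size 2 \
--output_path logs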
In progress:
- Support for vLLM models
- Support for OpenAI
For faster evaluation using data parallelism (recommended):
accelerate launch --num-processes <num-gpus> --num-machines <num-nodes> \
--multi-gpu -m eval.eval \
--model hf \
--tasks MTBench,alpaca_eval \
--model_args 'pretrained=mistralai/Mistral-7B-Instruct-v0.3' \
--batch_size 2 \
--output_path logs
For models that don't fit on a single GPU, use model parallelism:
python -m eval.eval \
--model hf \
--tasks MTBench,alpaca_eval \
--model_args 'pretrained=mistralai/Mistral-7B-Instruct-v0.3,parallelize=True' \
--batch_size 2 \
--output_path logs
💡 Note: While "auto" batch size is supported, we recommend manually tuning the batch size for optimal performance. The optimal batch size depends on the model size, GPU memory, and the specific benchmark. We used a maximum of 32 and a minimum of 4 (for RepoBench) to evaluate Llama-3-8B-Instruct on 8xH100 GPUs.
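For example, a memory-heavy benchmark such as RepoBench can be run with a smaller batch size (the task name and model ID below are illustrative; check the task registry and your own model path):
python -m eval.eval \
--model hf \
--tasks RepoBench \
--model_args "pretrained=meta-llama/Meta-Llama-3-8B-Instruct" \
--batch_size 4 \
--output_path logs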
Our generated logs include critical information about each evaluation to help inform your experiments. We highlight the most important items below.
- Model Configuration
  - `model`: Model framework used
  - `model_args`: Model arguments for the model framework
  - `batch_size`: Size of processing batches
  - `device`: Computing device specification
  - `annotator_model`: Model used for annotation ("gpt-4o-mini-2024-07-18")
- Seed Configuration
  - `random_seed`: General random seed
  - `numpy_seed`: NumPy-specific seed
  - `torch_seed`: PyTorch-specific seed
  - `fewshot_seed`: Seed for few-shot examples
- Model Details
  - `model_num_parameters`: Number of model parameters
  - `model_dtype`: Model data type
  - `model_revision`: Model version
  - `model_sha`: Model commit hash
- Version Control
  - `git_hash`: Repository commit hash
  - `date`: Unix timestamp of evaluation
  - `transformers_version`: Hugging Face Transformers version
- Tokenizer Configuration
  - `tokenizer_pad_token`: Padding token details
  - `tokenizer_eos_token`: End of sequence token
  - `tokenizer_bos_token`: Beginning of sequence token
  - `eot_token_id`: End of text token ID
  - `max_length`: Maximum sequence length
- Model Settings
  - `model_source`: Model source platform
  - `model_name`: Full model identifier
  - `model_name_sanitized`: Sanitized model name for file system usage
  - `chat_template`: Conversation template
  - `chat_template_sha`: Template hash
- Timing Information
  - `start_time`: Evaluation start timestamp
  - `end_time`: Evaluation end timestamp
  - `total_evaluation_time_seconds`: Total duration
- Hardware Environment
  - PyTorch version and build configuration
  - Operating system details
  - GPU configuration
  - CPU specifications
  - CUDA and driver versions
  - Relevant library versions
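For example, assuming the standard LM-Eval-Harness results layout, where these fields live under the top-level `config` key, you can inspect them with `jq` (using the same illustrative path as above):
jq '.config' logs/Qwen__Qwen2.5-7B-Instruct/results_2024-11-17T17-12-28.668908.json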
As part of Evalchemy, we want to make swapping in different Language Model Judges for standard benchmarks easy. Currently, we support two judge settings. The first is the default setting, where we use a benchmark's default judge. To activate this, you can either do nothing or pass in `--annotator_model auto`.
In addition to the default assignments, we support using gpt-4o-mini-2024-07-18 as a judge: `--annotator_model gpt-4o-mini-2024-07-18`
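For example, combining this flag with the earlier command shape:
python -m eval.eval \
--model hf \
--tasks MTBench \
--model_args "pretrained=mistralai/Mistral-7B-Instruct-v0.3" \
--batch_size 2 \
--annotator_model gpt-4o-mini-2024-07-18 \
--output_path logs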
We are planning on adding support for different judges in the future!
Evalchemy makes running common benchmarks simple, fast, and versatile! Below we list the runtimes and costs we achieve with Evalchemy for each benchmark, measured with Meta-Llama-3-8B-Instruct on 8xH100 GPUs.
| Benchmark | Runtime (8xH100) | Batch Size | Total Tokens | Default Judge Cost ($) | GPT-4o-mini Judge Cost ($) | Notes |
|---|---|---|---|---|---|---|
| MTBench | 14:00 | 32 | ~196K | 6.40 | 0.05 | |
| WildBench | 38:00 | 32 | ~2.2M | 30.00 | 0.43 | Using GPT-4-mini judge |
| RepoBench | 46:00 | 4 | - | - | - | Lower batch size due to memory |
| MixEval | 13:00 | 32 | ~4-6M | 3.36 | 0.76 | Varies by judge model |
| AlpacaEval | 16:00 | 32 | ~936K | 9.40 | 0.14 | |
| HumanEval | 4:00 | 32 | - | - | - | No API costs |
| IFEval | 1:30 | 32 | - | - | - | No API costs |
| ZeroEval | 1:44:00 | 32 | - | - | - | Longest runtime |
| MBPP | 6:00 | 32 | - | - | - | No API costs |
| MMLU | 7:00 | 32 | - | - | - | No API costs |
| ARC | 4:00 | 32 | - | - | - | No API costs |
| DROP | 20:00 | 32 | - | - | - | No API costs |
Notes:
- Runtimes measured using 8x H100 GPUs with Meta-Llama-3-8B-Instruct model
- Batch sizes optimized for memory and speed
- API costs vary based on judge model choice
Cost-Saving Tips:
- Use gpt-4o-mini-2024-07-18 judge when possible for significant cost savings
- Adjust batch size based on available memory
- Consider using data-parallel evaluation for faster results
To run ZeroEval benchmarks, you need to:
- Request access to the ZebraLogicBench-private dataset on Hugging Face
- Accept the terms and conditions
- Log in to your Hugging Face account when running evaluations
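If the machine running the evaluation is not already authenticated, the login step from the installation instructions can be repeated:
huggingface-cli login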
To add a new evaluation system:
- Create a new directory under `eval/chat_benchmarks/`
- Implement `eval_instruct.py` with two required functions:
  - `eval_instruct(model)`: Takes an LM Eval Model, returns a results dict
  - `evaluate(results)`: Takes a results dictionary, returns evaluation metrics
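A minimal sketch of such a module is shown below. This is not the exact interface: the directory name, placeholder data, and metric are illustrative, and the existing benchmarks under `eval/chat_benchmarks/` should be treated as the reference implementation.
# eval/chat_benchmarks/my_benchmark/eval_instruct.py  (hypothetical path)

def eval_instruct(model):
    """Generate model outputs for this benchmark and return a results dict.

    In practice this builds LM-Eval-Harness Instance objects and runs them
    through the model in batches, as in the batch-processing example later
    in this README.
    """
    examples = [{"prompt": "Write a haiku about evaluation."}]  # placeholder data
    generations = ["" for _ in examples]  # replace with real batched generation
    return {"examples": examples, "generations": generations}


def evaluate(results):
    """Compute evaluation metrics from the dict produced by eval_instruct."""
    return {"num_examples": len(results["generations"])}  # placeholder metric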
Use git subtree to manage external evaluation code:
# Add external repository
git subtree add --prefix=eval/chat_benchmarks/new_eval https://github.com/original/repo.git main --squash
# Pull updates
git subtree pull --prefix=eval/chat_benchmarks/new_eval https://github.com/original/repo.git main --squash
# Push contributions back
git subtree push --prefix=eval/chat_benchmarks/new_eval https://github.com/original/repo.git contribution-branch
To run evaluations in debug mode, add the `--debug` flag:
python -m eval.eval \
--model hf \
--tasks MTBench \
--model_args "pretrained=mistralai/Mistral-7B-Instruct-v0.3" \
--batch_size 2 \
--output_path logs \
--debug
This is particularly useful when testing new evaluation implementations, debugging model configurations, verifying dataset access, and testing database connectivity.
- Utilize batch processing for faster evaluation:
from lm_eval.api.instance import Instance  # request object from LM-Eval-Harness

# Queue one generation request per example so they can be run as a single batch.
all_instances.append(
    Instance(
        "generate_until",
        example,
        (
            inputs,
            {
                "max_new_tokens": 1024,
                "do_sample": False,
            },
        ),
        idx,
    )
)

# Run all queued requests through the model in one batched call.
outputs = self.compute(model, all_instances)
- Use the LM-eval logger for consistent logging across evaluations
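For instance, a minimal sketch (recent LM-Eval-Harness versions expose an `eval_logger` in `lm_eval.utils`; check your installed version):
from lm_eval.utils import eval_logger  # assumption: lm-eval-harness 0.4.x layout

eval_logger.info("Generated %d responses for MyBenchmark", 128)
eval_logger.warning("Falling back to greedy decoding")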
Evalchemy has been tested on CUDA 12.4. If you run into an error like `undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12`, try updating your CUDA version:
wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo add-apt-repository contrib
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4
To track experiments and evaluations, we support logging results to a PostgreSQL database. Details on the entry schemas and database setup can be found in the database directory.