- [2025-01-14] Released EvoX 1.0.0 - now fully compatible with PyTorch! Users of the previous JAX-based version can access it on the v0.9.0 branch.
EvoX is a distributed GPU-accelerated evolutionary computation framework compatible with PyTorch*. With a user-friendly programming model, it offers a comprehensive suite of 50+ Evolutionary Algorithms (EAs) and a wide range of 100+ Benchmark Problems/Environments. For more details, please refer to our Paper and Documentation.
*Users of the previous JAX-based version can access it on the v0.9.0 branch.
- Supports acceleration on heterogeneous hardware, including both CPUs and GPUs, achieving over 100x speedups.
- Integrates distributed workflows that scale seamlessly across multiple nodes or devices.
- Includes 50+ algorithms for a wide range of use cases, fully supporting single- and multi-objective optimization.
- Provides a hierarchical architecture for complex tasks such as meta-learning, hyperparameter optimization, and neuroevolution.
- Fully compatible with PyTorch and its ecosystem, simplifying algorithmic development with a tailored programming model.
- Ensures effortless setup with one-click installation for Windows users.
- Features 100+ benchmark problems spanning single-objective optimization, multi-objective optimization, and real-world engineering challenges.
- Integrates seamlessly with physics engines like Brax and other popular frameworks for reinforcement learning.
- Provides an encapsulated module for defining and evaluating custom problems tailored to user needs, with seamless integration into real-world applications and datasets (see the sketch after this list).
- Offers a comprehensive set of visualization tools for analyzing evolutionary processes across various tasks.
- Enables users to integrate their own visualization code, allowing for tailored and flexible visualizations.
- Leverages the tailored .exv format to simplify and accelerate real-time data streaming.
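As a rough illustration of the custom-problem module mentioned above, the sketch below subclasses a problem base class and implements a batched evaluation. The import path (`evox.core.Problem`) and the `evaluate` signature are assumptions based on the documented interface and should be verified against the installed version.

```python
import torch
from evox.core import Problem  # assumed import path for the problem base class


class Sphere(Problem):
    """Hypothetical custom problem: the sphere function, evaluated batch-wise."""

    def evaluate(self, pop: torch.Tensor) -> torch.Tensor:
        # pop has shape (pop_size, dim); return one fitness value per individual
        return (pop**2).sum(dim=1)
```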
For single-objective optimization:

| Category | Algorithms |
| --- | --- |
| Differential Evolution | CoDE, JaDE, SaDE, SHADE, IMODE, ... |
| Evolution Strategy | CMA-ES, PGPE, OpenES, CR-FM-NES, xNES, ... |
| Particle Swarm Optimization | FIPS, CSO, CPSO, CLPSO, SL-PSO, ... |
For multi-objective optimization:

| Category | Algorithms |
| --- | --- |
| Dominance-based | NSGA-II, NSGA-III, SPEA2, BiGE, KnEA, ... |
| Decomposition-based | MOEA/D, RVEA, t-DEA, MOEAD-M2M, EAG-MOEAD, ... |
| Indicator-based | IBEA, HypE, SRA, MaOEA-IGD, AR-MOEA, ... |
For benchmark problems/environments:

| Category | Problems/Environments |
| --- | --- |
| Numerical | DTLZ, LSMOP, MaF, ZDT, CEC'22, ... |
| Neuroevolution/RL | Brax, TorchVision Dataset, ... |
For a comprehensive list and detailed descriptions of all algorithms, please check the Algorithms API, and for benchmark problems/environments, refer to the Problems API.
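As a rough illustration of how entries in these tables map to code, a multi-objective setup might pair an algorithm from the decomposition-based row with a numerical benchmark. This is a hedged sketch: the import paths and constructor arguments (`pop_size`, `n_objs`, `lb`, `ub`, `d`, `m`) are assumptions based on the Algorithms/Problems API and may differ between versions.

```python
import torch
from evox.algorithms import RVEA            # decomposition-based multi-objective EA
from evox.problems.numerical import DTLZ2   # numerical benchmark problem

# 3-objective DTLZ2 over a 12-dimensional decision space (argument names are assumptions)
problem = DTLZ2(d=12, m=3)
algorithm = RVEA(pop_size=100, n_objs=3, lb=torch.zeros(12), ub=torch.ones(12))
```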
Install `evox` effortlessly via `pip`:

`pip install evox`
Note: Windows users can use the win-install.bat script for installation.
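After installation, a basic run can follow the standard algorithm-problem-workflow pattern. The snippet below is a minimal sketch assuming the `PSO`, `Ackley`, `StdWorkflow`, and `EvalMonitor` interfaces described in the Documentation; check the exact constructor arguments against your installed version.

```python
import torch
from evox.algorithms import PSO
from evox.problems.numerical import Ackley
from evox.workflows import StdWorkflow, EvalMonitor

# Minimize the 10-dimensional Ackley function with a 100-particle swarm
algorithm = PSO(pop_size=100, lb=-32 * torch.ones(10), ub=32 * torch.ones(10))
problem = Ackley()
monitor = EvalMonitor()

workflow = StdWorkflow(algorithm, problem, monitor)
workflow.init_step()
for _ in range(100):
    workflow.step()
```

The same pattern applies to the multi-objective sketch above: swap in those algorithm and problem instances and reuse the workflow.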
- TensorNEAT: Tensorized NeuroEvolution of Augmenting Topologies (NEAT) for GPU Acceleration. Check it out here.
- TensorRVEA: Tensorized Reference Vector Guided Evolutionary Algorithm (RVEA) for GPU Acceleration. Check it out here.
- TensorACO: Tensorized Ant Colony Optimization (ACO) for GPU Acceleration. Check it out here.
- EvoXBench: A real-world benchmark platform for solving various optimization problems, such as Neural Architecture Search (NAS). It requires no GPUs, PyTorch, or TensorFlow and supports multiple programming environments. Check it out here.
Stay tuned - more exciting developments are on the way!
- Join discussions on the GitHub Discussion Board.
- Connect via Discord or QQ group (ID: 297969717).
- Help translate the EvoX docs on Weblate. We currently support two languages: English and Chinese.
If EvoX contributes to your research, please cite it:
@article{evox,
title = {{EvoX}: {A} {Distributed} {GPU}-accelerated {Framework} for {Scalable} {Evolutionary} {Computation}},
author = {Huang, Beichen and Cheng, Ran and Li, Zhuozhao and Jin, Yaochu and Tan, Kay Chen},
journal = {IEEE Transactions on Evolutionary Computation},
year = 2024,
doi = {10.1109/TEVC.2024.3388550}
}