EHVI & NEHVI break with more than 7 objectives #2387
Comments
This reproduces. Thanks for reporting. Weird bug!
One thing to note is that HV-based acquisition functions generally don't scale well to problems with many objectives. 2-3 objectives are generally fine; with 4+ you'll likely see a pretty substantial slowdown and/or memory explosion because of how complex the box decompositions of the Pareto set become. In a case with 8 objectives such as yours, you'll likely want to either express some of the objectives as constraints (if that's possible), drop them from the optimization, or use a different acquisition function such as qParEGO (which will scale better but is less sample efficient).
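The scaling problem described above can be seen directly: as the number of objectives m grows, almost every point becomes nondominated, so the Pareto set (and hence its box decomposition) explodes. A minimal sketch in plain NumPy (the `pareto_mask` helper is illustrative, not BoTorch's implementation):

```python
import numpy as np

def pareto_mask(Y: np.ndarray) -> np.ndarray:
    """Boolean mask of nondominated rows of Y (maximization)."""
    n = Y.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(Y, i, axis=0)
        # Point i is dominated if some other point is >= in every
        # objective and strictly > in at least one.
        dominated = ((others >= Y[i]).all(axis=1) & (others > Y[i]).any(axis=1)).any()
        mask[i] = not dominated
    return mask

rng = np.random.default_rng(0)
n = 200
for m in (2, 4, 8):
    frac = pareto_mask(rng.random((n, m))).mean()
    print(f"m={m}: {frac:.0%} of {n} random points are nondominated")
```

With random points, the nondominated fraction climbs sharply from m=2 to m=8, which is why the box decompositions get so expensive.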
I started looking into this, and the bug seems to stem from the hypervolume computations starting to use zero cells once m>7, because this check for Pareto dominance always evaluates to
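For context on why a degenerate dominance check zeroes out the result: the hypervolume is computed by decomposing the dominated region into disjoint boxes ("cells") and summing their volumes, so if every cell collapses, the HV is zero regardless of the candidates. A minimal exact 2D version for maximization (illustrative only; BoTorch's box decompositions handle general m):

```python
import numpy as np

def hv2d(pareto_Y: np.ndarray, ref: np.ndarray) -> float:
    """Exact 2D hypervolume (maximization) of a Pareto set w.r.t. ref."""
    # Sort by the first objective, descending; for a 2D Pareto set the
    # second objective is then ascending, so the boxes are disjoint.
    Y = pareto_Y[np.argsort(-pareto_Y[:, 0])]
    hv, prev_y1 = 0.0, ref[1]
    for y0, y1 in Y:
        hv += (y0 - ref[0]) * (y1 - prev_y1)  # one disjoint cell
        prev_y1 = y1
    return hv

# Three mutually nondominated points
Y = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
print(hv2d(Y, ref=np.array([0.0, 0.0])))  # → 6.0
```

Each loop iteration adds one cell; a bug that makes every cell empty yields HV = 0 for all candidates, matching the observed failure mode.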
cc @sdaulton |
Hi, I wanted to ask if there are any updates on this issue. Cheers! |
I'm afraid I haven't made any progress on this, but it remains a bug we want to understand. |
@esantorella, @Balandat, my sense is that we may want to validate against too many objectives in (n)EHVI and simply disallow this behavior, since users are likely best served by either (a) converting some of their objectives into constraints, or (b) using ParEGO if they really do have this many objectives. Is that right?
That is correct @lena-kashtelyan, people should not be using EHVI-based methods for 7+ objectives. I am not sure what the default values are for the approximate HV computation, but if we are sufficiently aggressive (zeta=1e-3) at higher dimensions (say M=4 or M=5), then it could be reasonably fast relative to ParEGO (see p. 29 of https://arxiv.org/pdf/2006.05078). I would recommend making sure that we kick into more aggressive approximation at higher dimensionalities, and for anything 6 or higher default to ParEGO and throw a warning.

@schmoelder, can you tell us a little more about your use case? MOO tends to be less useful and less sample efficient when you have so many objectives, since the area of the frontier grows exponentially with the number of objectives, and ultimately people are interested in just a few "good" tradeoffs. Many people have legitimate reasons for wanting to optimize this many objectives, and we've developed methods that use preference-based feedback to do the search more efficiently than multi-objective Bayesian optimization (paper @ https://arxiv.org/pdf/2203.11382, code @ https://botorch.org/tutorials/bope).
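As a concrete illustration of the ParEGO route mentioned above: each iteration draws a random weight vector and optimizes an augmented Chebyshev scalarization of the objectives, which keeps the cost linear in the number of objectives instead of paying for a box decomposition. A hedged NumPy sketch (BoTorch provides an analogous helper, `get_chebyshev_scalarization` in `botorch.utils.multi_objective.scalarization`; the `alpha` value and Dirichlet weight draw here are illustrative choices, not BoTorch defaults):

```python
import numpy as np

def chebyshev_scalarize(Y: np.ndarray, weights: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Augmented Chebyshev scalarization for maximization:
    min_i w_i * y_i + alpha * sum_i w_i * y_i."""
    wy = Y * weights
    return wy.min(axis=-1) + alpha * wy.sum(axis=-1)

rng = np.random.default_rng(0)
m = 8
weights = rng.dirichlet(np.ones(m))  # random positive weights summing to 1
Y = rng.random((5, m))               # 5 candidate outcome vectors
scores = chebyshev_scalarize(Y, weights)
print(scores.shape)                  # (5,)
```

Redrawing `weights` each BO iteration explores different tradeoffs on the frontier, which is what makes ParEGO-style methods scale to many objectives at the cost of some sample efficiency.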
Hello Ax Team,
When running EHVI or NEHVI with more than 7 objectives, we get an error during the evaluation of the objective function.
Here's an MRE:
and here's the full traceback: