Hi, I am trying to test more benchmark functions, but I find that the returned dict for each test function includes the so-called true settings of the GP model hyperparameters, e.g., RBF.variance and RBF.lengthscale. I am not sure how to obtain this information, and I am curious about why it is needed.
Hi, we assume that the true hyperparameters are known, so they are provided with the benchmark functions. To obtain them, we randomly draw a number of function evaluations and fit a GP on this dataset; we then treat the fitted hyperparameters as the ground-truth hyperparameters of the function. In practice, when the hyperparameters are unknown, they can be obtained by fitting a GP on the observations collected in the previous BO iterations. This often leads to worse performance at the beginning of BO, when the number of observations is small and the estimated hyperparameters are therefore inaccurate.
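A minimal sketch of that procedure, assuming GPy (the RBF.variance / RBF.lengthscale names suggest a GPy kernel) and a hypothetical `benchmark_f`; the actual benchmark, input dimension, and number of samples are placeholders, not the repository's exact code:

```python
import numpy as np
import GPy

# Hypothetical benchmark function; replace with the benchmark you are testing.
def benchmark_f(X):
    return np.sin(3.0 * X[:, :1]) + 0.1 * np.random.randn(X.shape[0], 1)

input_dim = 1

# 1. Randomly draw a number of function evaluations.
X = np.random.uniform(0.0, 1.0, size=(200, input_dim))
Y = benchmark_f(X)

# 2. Fit a GP with an RBF kernel by maximizing the log marginal likelihood.
kernel = GPy.kern.RBF(input_dim=input_dim)
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()

# 3. Treat the fitted kernel hyperparameters as the "ground-truth"
#    hyperparameters reported alongside the benchmark.
print(kernel.variance)               # RBF.variance
print(kernel.lengthscale)            # RBF.lengthscale
print(model.Gaussian_noise.variance) # observation noise
```

In the unknown-hyperparameter setting described above, step 2 would simply be rerun inside the BO loop on the observations gathered so far instead of on a large random sample.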
Thank you for the explanation. Although I guessed the right way to obtain these hyperparameters, I am still confused about their usage. After acquiring the 'ground truth' of these hyperparameters, where are they applied, and what are they applied for? I agree that these hyperparameters significantly influence performance. However, most Bayesian Optimization methods do not require this information; they commonly estimate the hyperparameters from the sampled data by maximizing the log-likelihood or via MCMC sampling. I understand how those methods acquire the hyperparameters and where they are used (in the GP posterior distribution), but I am not sure how the 'ground truth' of these hyperparameters is used here. Looking forward to more explanation, thank you.