
Access to non-persistent data such as acqf functions and the overall model #78

Closed
tatsuya-takakuwa opened this issue Jan 5, 2024 · 31 comments
Labels: new feature (New functionality), question (Further information is requested)

@tatsuya-takakuwa

Thank you for the wonderful tool! I have a question about the architecture. Is there a way to add already-observed experimental data to a campaign as experiments? Also, is there a way to check the average and variance of the recommended experimental points and the exploration space, as well as the evaluation values of the acquisition function, in a dataframe?

@tatsuya-takakuwa changed the title from Researcher to already observed experimental data on Jan 5, 2024
@Scienfitz added the question (Further information is requested) label on Jan 6, 2024
@Scienfitz
Collaborator

Hi @tatsuya-takakuwa

Is there a way to add already observed experimental data to a campaign in experiments?

So I'm assuming you know the very basic example here: https://emdgroup.github.io/baybe/examples/Basics/campaign.html
After you've created your mycampaign = Campaign(...) object, there's nothing that forces you to request recommendations. You might as well simply add your already existing measurements (assuming they match the search space you've created) via mycampaign.add_measurements(your_data_as_pandas_dataframe) and then continue with requesting recommendations. Does that answer your question, or are there more subtleties to it?
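For illustration, a minimal sketch of that flow (the parameter/target names "Temperature", "Solvent", and "Yield" are placeholders and must match your own search space and objective):

import pandas as pd

prior_data = pd.DataFrame(
    {
        "Temperature": [90, 105],
        "Solvent": ["THF", "Toluene"],
        "Yield": [0.42, 0.51],
    }
)

mycampaign.add_measurements(prior_data)  # feed in the existing observations
rec = mycampaign.recommend(5)            # then continue as usual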

Also, is there a way to check the average and variance of the recommended experimental points and exploration space,

The result of the recommend call, e.g. rec = mycampaign.recommend(20), is a pandas dataframe. You can simply obtain the variances and means via pandas' built-in functions, for instance rec['Parameter42'].var() or rec['Parameter1337'].mean().

In a similar fashion, you can access the discrete part of the search space in experimental representation via mycampaign.searchspace.discrete.exp_rep, which is also a pandas dataframe, so the above applies. Note that we do not recommend meddling with the internal objects of a campaign object; treat them as read-only. If your search space has continuous parts, the mean and variance arise somewhat trivially from the parameter definitions.
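A short sketch putting this together (the column name "Parameter42" is a placeholder for one of your parameters):

rec = mycampaign.recommend(20)  # pandas dataframe of recommended points

print(rec["Parameter42"].mean(), rec["Parameter42"].var())

space = mycampaign.searchspace.discrete.exp_rep  # treat as read-only
print(space.describe())  # per-column count/mean/std/min/max of the discrete space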

as well as the evaluation values of the acquisition function in a dataframe?

The values of the acquisition function are not persisted in our code at the moment, so at this stage that is not possible. We will add this request to the list of properties we are considering exposing upon request.

@tatsuya-takakuwa
Author

@Scienfitz
Thank you for your prompt response.
I apologize for any misunderstanding caused by my question.

What I meant was the following:
The average and variance of recommended experimental points and exploration space
→ How to output the predicted values (average and variance) of the surrogate model for all recommended experimental points and the entire exploration space.

I understand that if it's difficult to output the evaluation of the acquisition function, then these are probably also not possible.

In actual use cases, when there is only a small amount of experimental data, these indicators are necessary to explain the purpose of the experiment to the experimenters. I am looking forward to being able to output these in future updates.

@Scienfitz
Collaborator

@tatsuya-takakuwa
ah I see, sorry for my confusion.

But in that case it's indeed the same problem as for the acquisition function values: they are not stored persistently at the moment.

@AdrianSosic do you see a hacky way of extracting what's been requested here? Otherwise I think it might be time for us to look into these extractables as a new feature. I think one potentially elegant way could be for the recommend function to have optional arguments that serve as callbacks, or as variables in which the requested additional info is stored; after all, mean and covariance are already calculated.

@Scienfitz added the new feature (New functionality) label on Jan 7, 2024
@AdrianSosic
Collaborator

Hi @tatsuya-takakuwa 👋. Fully agree with what @Scienfitz wrote. Currently, the surrogate model and acquisition function are thrown away after the computation, so I think there is no straightforward way to access this information at the moment. However, making this type of diagnostic information available is definitely on the roadmap! Currently, we are focusing on improving the docs (the package went public just a month ago), but I guess we can tackle your request after that 👍🏼

@tatsuya-takakuwa
Author

@Scienfitz @AdrianSosic
Thank you for your response. I appreciate your consideration regarding the addition of diagnostic features. I'm looking forward to it.

@AVHopp
Collaborator

AVHopp commented Mar 6, 2024

Hey @tatsuya-takakuwa, just a quick update here: starting mid-April, we plan to include ways of "making information that is obtained during calculation available". I am staying vague here on purpose since we still need to discuss how exactly this will be included and when it will be available, but the current plan is to include it in the way @Scienfitz described on Jan 7. So your issue has not been forgotten and is on the roadmap 😄

@brandon-holt
Contributor

brandon-holt commented Mar 27, 2024

Hi, registering my interest in this feature as well (original thread #184).

@Scienfitz With respect to injecting our own code into acquisition.py as a temporary workaround, would you mind pointing me in the right direction?

I suspect the information I'm looking for is somewhere in the posterior method of the AdapterModel class, but I'm not sure exactly where to go from there (if that even is the right starting point haha).

@zhensongds

Registering my interest in this topic too. We also need the surrogate model and acquisition function along with the recommended candidates.

@Scienfitz changed the title from already observed experimental data to Access to non-persistent data such as acqf functions and the overall model on Apr 2, 2024
@Scienfitz
Collaborator

I renamed this Issue to reflect the open part of the original Issue.

Old name: already observed experimental data

Thank you @zhensongds and @brandon-holt for following the suggestion and registering your interest here. We will let you know as soon as there is news on this topic, which is now actively being implemented.

@AVHopp @RimRihana, for your reference: I intend this open issue as a collection point for everyone expressing interest in what is the intended outcome of Rim's project.

@brandon-holt
Contributor

@Scienfitz Hey! Just curious if there's an updated ETA on the release of this feature?

@AdrianSosic
Collaborator

Hi @brandon-holt, the feature is already in sight: once #220 is merged, the infrastructure that will allow us to expose the model/acqf to users will be in place. From there on, it's only one PR to flesh out the interface. Normally, I'd say two weeks from now, but currently it's bank holiday season here in Germany, which will slow us down quite a bit. So perhaps end of the month, if no unexpected problems arise. But you can follow the progress in the mentioned PR 👌

There is also already a second workstream that will allow users to peek into the baybe code flow via custom hooks, but that is a little further down the road.

@brandon-holt
Contributor

@AdrianSosic now that it looks like #220 has been merged, does this mean the feature is now accessible? Would it be possible to have a quick example of how to expose the model and generate predictions of target variables, assess accuracy, etc?

@AdrianSosic
Collaborator

😄 seems someone was watching me merge the PR 😄

We are one step closer but unfortunately not yet there. There is one last refactoring needed, namely the one that lets us use our surrogates with the experimental (instead of computational) representation of the data. However, I can try to draft the mechanism using an intermediate solution on a separate branch, which could also serve as a test to verify that the intended final layout actually works. Give me a moment to think about it, I will contact you again here ...

@AdrianSosic
Collaborator

AdrianSosic commented Jun 4, 2024

@brandon-holt 👋🏼 I've quickly drafted a demo for the surrogate model access (based on a temporary solution) that you'll find in the demo/model_access branch. While this is already fully functional in that it gives you access to the posterior mean and covariance of the surrogate model, please note that this is really only a draft; many things still need to be fleshed out and will change in the later version. For instance:

  • You don't get the actual surrogate but currently just a callable that mimics the posterior function of the surrogate
  • There is no access to the callable on the recommender protocol level but only for Bayesian recommenders, hence the rather cumbersome access via campaign.recommender.recommender.get_surrogate()
  • You need to explicitly call recommend first in order to trigger model training
  • No error handling whatsoever
  • ...

But let me know if this already helps. Similarly, we can provide access to the acquisition function.
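For concreteness, a rough sketch of how this could be used (measurements and candidates are placeholder dataframes in experimental representation, with columns matching your parameters/targets):

campaign.add_measurements(measurements)  # your observed data as a pandas dataframe
campaign.recommend(3)                    # currently needed to trigger surrogate training

posterior_fn = campaign.recommender.recommender.get_surrogate()
mean, covar = posterior_fn(candidates)   # torch tensors: posterior mean and covariance
variances = covar.diagonal()             # per-candidate posterior variance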

Also @Scienfitz, please have a look and let me know what you think. I guess you can imagine what the final solution would look like once fully integrated with the refactored surrogates ...

@brandon-holt
Contributor

brandon-holt commented Jun 4, 2024

@AdrianSosic Amazing, thank you!!! Sorry, would you mind pointing me to a link with the demo/example code?

Is it somewhere near here?
https://github.com/emdgroup/baybe/tree/demo/model_access/examples

@AdrianSosic
Collaborator

No, I just placed it directly into the package root, that is, baybe/demo.py (this branch is just for demo purposes)

@Scienfitz
Collaborator

@brandon-holt @zhensongds @tatsuya-takakuwa

I just wanted to inform you that the principal functionality enabling the access to internal/non-persistent info requested in this thread is now on main. It was merged in #275 and #276.

Keep in mind this is not released yet

  • It's not documented yet; there is, however, one simple example here illustrating the concept
  • We plan to provide examples and a userguide for it

We can enable some exciting new features with that (work in progress):

  • Auto-stopping unpromising campaigns
  • Access to the model and feature importance
  • Outlier detection

@AdrianSosic
Collaborator

Hi @Scienfitz, thanks for informing the people in this issue ✌🏼 Still, to avoid confusion about how this relates to the original request, I'd like to add:

What @Scienfitz describes here is a mechanism that allows accessing internals by configuring custom hooks that can be attached to callables. It has the advantage that it is completely generic and allows inspecting arbitrary objects generated at runtime. However, it also means that you need to write these hooks yourself.
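To illustrate the general idea in plain Python (this is not the actual baybe hook API; my_recommender and the helper below are purely hypothetical):

import functools

def with_post_hook(func, hook):
    # Wrap a callable so that `hook` can inspect its arguments and result.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        hook(result, *args, **kwargs)  # e.g. stash intermediate objects for later inspection
        return result
    return wrapper

# Hypothetical usage: capture every recommendation batch that gets computed.
captured = []
patched_recommend = with_post_hook(
    my_recommender.recommend,
    lambda rec, *a, **kw: captured.append(rec),
)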

So, in addition to the above, we'll be offering two more things soon:

  • A "library" of hooks for common tasks (like the ones that @Scienfitz mentioned above)
  • A built-in (!) way to access the trained surrogate model and acquisition function that does not require setting up hooks. This is currently still work in progress (e.g. Comp Rep Transition Point #278 as another step in that direction)

@brandon-holt
Contributor

@AdrianSosic How would I install the branch that includes the model access feature?

@AdrianSosic
Collaborator

Hi @brandon-holt, which one exactly are we talking about? The temporary demo/model_access branch that I created as a preview for you here? You should be able to do that via

pip install git+https://github.com/emdgroup/baybe.git@demo/model_access

@brandon-holt
Contributor

brandon-holt commented Jul 12, 2024

@AdrianSosic So for this example here, there is a line

# Recommend to train surrogate
campaign.recommend(3)

Can you explain the idea here in more detail? Isn't the step where the measurements are added what actually fits the model? How does just the process of calling the acquisition function do anything to the underlying model/GP? Especially without adding measurements for those recommended experiments? That step seems unnecessary to me, so I must be missing an important concept.

Also, does the number of recommended experiments matter? Can you do 1, 100, 1000?

Edit: I do see that without this line I get the error AttributeError: 'NoneType' object has no attribute 'transform', which leads me to believe the recommend command is what triggers the creation of surrogate._searchspace in this section of the code in base.py:

def get_surrogate(
    self,
) -> Callable[[pd.DataFrame], tuple[Tensor, Tensor]]:  # noqa: D102
    def behaves_like_surrogate(exp_rep: pd.DataFrame, /) -> tuple[Tensor, Tensor]:
        comp_rep = self._searchspace.transform(exp_rep)
        return self.surrogate_model.posterior(to_tensor(comp_rep))

Perhaps my misunderstanding is around the difference between the underlying GP model and the concept of the surrogate. Are these different names for the same thing or is there a distinction?

Thanks in advance!

@AdrianSosic
Collaborator

Hi @brandon-holt. Just upfront, to make sure we're on the same page: everything on that branch was just a quick hacky workaround to give you access to the posterior mean/variance within the current code layout. Nothing you see here is set in stone and, in fact, the implementation of the final solution will look completely different, giving you much more flexibility in terms of model access, which will also work for future extensions that are already on the way (e.g. non-Gaussian models, etc.). There is already an entire dev branch (dev/surrogates) on which I am very actively working at the moment. So please ignore the internal details completely and the ugly workaround of the recommend call.

That said, to answer your questions:

  • Yes, in the hacky demo, the recommend call is currently needed since it triggers the creation of the surrogate model in the first place. The reason is that, in contrast to your conjecture, the model is not trained upon adding new data but lazily when requesting recommendations. (<- this will be adjusted in the new version) But for now: it doesn't really matter how many recommendations you request when your only goal is to access the surrogate – just use any number. However, if you are interested in the recommendations themselves, you should definitely put exactly the number you desire (see note on "batch optimization" in our campaign userguide)
  • The GP is a particular kind of surrogate model, which we use as the default (it is the de facto standard workhorse for Bayesian optimization, for several reasons). But you can switch to other model types if needed. The emphasis here is on "if needed", though, i.e. I would not switch unless I have a very specific reason to do so.

Hope that helps! If you have further questions, let me know. My hope is that I can finish the surrogate refactoring (whose scope became much larger than initially anticipated) within the next two weeks. Once done, it'll give you very easy access to all sorts of posterior distributions of any model kind (e.g. bandit models, which are currently also in the pipeline).

@brandon-holt
Contributor

brandon-holt commented Jul 17, 2024

@AdrianSosic This all makes sense, thank you for explaining! On the topic of the next release, do you have an expected timeline for when feature importance analysis will be available?

Also, do you have any idea why the mean values produced by the surrogate are all negative? The absolute values are in the correct range for the dataset (~1.8-2.6), but are all negative. Here is a scatter plot

[scatter plot omitted]

When I multiply the predicted values by -1, the r2 score is ~0.4, suggesting a relatively decent prediction on the test dataset. Do you know why this is happening? Would it be appropriate to take the negative of the mean values generated by the surrogate?

@AdrianSosic
Collaborator

@brandon-holt No clear timeline for that, unfortunately, but once the surrogate refactoring is completed, it will be very easy to extract the importances, at least for GP models, without any additional machinery. So even if it's unclear when a clean API for that will be made available, I could very quickly hack a temporary solution for you similar to the one above. In fact, you could even do it yourself based on the above hack if you feel confident to dig down into the gpytorch models. All you need to do is access the kernel of the trained GP inside the surrogate model and extract its lengthscales. This will already provide you the raw importance scores. Bringing this into a production-ready state is then more about putting the necessary bits and pieces around it, i.e. aggregating these scores from the computational level back onto the experimental level and properly normalizing them. Unfortunately, I'm too packed to do it for you right now, because we urgently need to get the next release out before the Acceleration Conference in August. But after that, I could look into it 👍🏼
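To make the idea a bit more concrete, a very rough sketch continuing the demo-branch access from above (all attribute paths here, e.g. surrogate_model, _model, and covar_module.base_kernel.lengthscale, are assumptions about the internals and may differ between versions):

# After at least one campaign.recommend(...) call, so that the model is trained:
surrogate = campaign.recommender.recommender.surrogate_model  # assumed attribute path
gp = surrogate._model  # assumed: the fitted gpytorch/botorch model

# A short lengthscale means the GP is sensitive to that (computational) feature.
lengthscales = gp.covar_module.base_kernel.lengthscale.detach().flatten()
raw_importance = 1.0 / lengthscales
importance = raw_importance / raw_importance.sum()  # crude normalization
print(importance)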

Regarding the "negative" issue: yes, I think I know quite exactly what the problem is. Let me guess: you used a "MIN" mode for your target variable, right? In my hack, I simply didn't properly take into account that the surrogate values need to be inverted again for that case. So you can simply do the inversion yourself in this case. In the final solution, none of these problems will remain, of course (hopefully :D)
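For now, the manual fix is just a sign flip on the returned mean (posterior_fn and candidates as in the earlier sketch):

mean, covar = posterior_fn(candidates)
mean = -mean  # undo the internal MIN -> MAX inversion for MIN-mode targets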

@brandon-holt
Contributor

brandon-holt commented Jul 18, 2024

@AdrianSosic wonderful, this clears everything up! Thanks for the detailed explanation :)

And yes you are correct this was a MIN mode for the target variable!

@brandon-holt
Contributor

brandon-holt commented Jul 31, 2024

@AdrianSosic Heyo, hope you have all been doing well! Just checking in on the latest timelines for an official release of this feature, as well as the feature importance analysis component?

On the feature analysis side, would there be any way this could be made compatible with a package like shap or similar?

@AdrianSosic
Collaborator

Hi @brandon-holt, thanks for maintaining your interest despite the long-running changes 👏🏼 I can imagine it may feel like not much is happening at the moment because most of the changes are still behind the scenes. But trust me: a lot of things are currently in progress 🙃

To give a brief update with a rough roadmap:

  • Tomorrow, we'll release version 0.10.0 (if nothing unexpected happens). As you can see here, this already contains a very long list of changes, but the most important ones (surrogate refactoring, hook mechanism, polars extension) are not even in there yet, since the merges of the dev branches still need to happen (today/tomorrow)
  • Once this is done, we'll have a complete new interface available for the surrogates and their posterior, which will enable all the new fancy features, including those requested in this issue.
  • All that is missing (and I'll start working on it next week) is the public layer to access these features.

This brings me to an important point: since I haven't fleshed out the details of what the interface for the requested features will look like, it would be great if you could share your precise wishlist. That is, how exactly would you ideally want to access/evaluate the model internals, such as the posterior predictions, acquisition values, and feature importances? Could you share some function/method calls as you envision them?

Of course, I cannot guarantee that the final layout will exactly match your expectations, but it'll definitely be helpful to get some user feedback on the API.

@AVHopp
Collaborator

AVHopp commented Aug 20, 2024

@brandon-holt since the new version is released now, did you have a chance to have a look?

@AdrianSosic
Collaborator

Hey @AVHopp, the new version has nothing to do with the features requested in this issue, but we're close.

@brandon-holt: The dev branch will be merged next week (waiting for @Scienfitz to come back from vacation and do a final review). After that, the surrogate predictions and acquisition values are generally available. Also, the problem with the "inverted" predictions you mentioned here is already addressed in #340, which I'll also merge next week. After that, you basically have access to the model internals and can already start working with them by manually accessing the posterior method of the surrogate object sitting in your recommender.

I will now think about a reasonable high-level interface to provide convenient user access to it. Wishes/ideas are still welcome 🙃

@Scienfitz
Collaborator

@nipunupta0092 @brandon-holt @tatsuya-takakuwa @zhensongds doing a round of tagging

  • short term
    Once Expose surrogate #355 is merged, you can follow the simple recipe outlined here to do feature importance or anything else involving the surrogate model

  • long term:
    This capability will be further consolidated into a new diagnostics sub-package for easier access, discussed in Upcoming Insights Package #357

@Scienfitz
Collaborator

Closing this issue, as #355 has made the surrogate available.
Further development on using this for diagnostics will be done in #357.
