Decode more than one molecule from one latent #73

Open
cankobanz opened this issue Feb 21, 2024 · 1 comment

@cankobanz

Hello, thank you for sharing your work.

I have a question regarding the decoding process. I'm curious about how to decode more than one molecule from a specific latent code, especially neighboring molecules with high log-likelihood relative to the molecule that is currently returned.

Currently, it seems that for the same latent input, the decoded output remains unchanged, and the sampling process doesn't seem to support starting from a specified latent position:

```python
def sample(self, num_samples: int) -> List[str]:
    """Sample SMILES strings from the model.

    Args:
        num_samples: Number of samples to return.

    Returns:
        List of SMILES strings.
    """
    return self.decode(self.sample_latents(num_samples))
```

I was considering customizing the `sample` method of the `GeneratorWrapper` to initiate from a specific latent point instead of starting from zeros. However, the provided checkpoint (`GNN_Edge_MLP_MoLeR__2022-02-24_07-16-23_best.pkl`) is configured for the `VaeWrapper`, not the `GeneratorWrapper`.

Note: I have taken into consideration your suggestion to add small noise to the latent, as discussed in issue 40. However, my primary interest lies in exploring a more refined solution, specifically through adjusting the `num_samples` parameter here:

```python
num_samples = min(num_samples, num_choices)  # Handle cases where we only have few candidates

if sampling_mode == DecoderSamplingMode.GREEDY:
    # Note that this will return the top num_samples indices, but not in order:
    picked_indices = np.argpartition(logprobs, -num_samples)[-num_samples:]
elif sampling_mode == DecoderSamplingMode.SAMPLING:
    p = np.exp(logprobs)  # Convert to probabilities
    # We can only sample values with non-zero probabilities
    num_choices = np.sum(p > 0)
    num_samples = min(num_samples, num_choices)
```
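For context, here is a small self-contained sketch of what the two branches above do; the log-probability values are made up for illustration:

```python
import numpy as np

# Hypothetical log-probabilities over 6 candidate decoding steps;
# -inf corresponds to a candidate with zero probability.
logprobs = np.array([-3.0, -0.9, -2.3, -1.4, -np.inf, -1.6])
num_samples = 3

# GREEDY: np.argpartition returns the indices of the top num_samples
# values, but in no particular order.
picked = np.argpartition(logprobs, -num_samples)[-num_samples:]

# SAMPLING: draw indices proportionally to probability; candidates
# with zero probability can never be selected.
p = np.exp(logprobs)
num_choices = int(np.sum(p > 0))
rng = np.random.default_rng(0)
sampled = rng.choice(len(p), size=min(num_samples, num_choices), p=p / p.sum())
```

In greedy mode the same top candidates are picked every time, which is why repeated decoding from the same latent yields the same molecule; sampling mode is what introduces variability.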

Thank you in advance for your assistance.

@kmaziarz
Collaborator

Sorry, I somehow missed your question!

> I was considering customizing the `sample` method of the `GeneratorWrapper` to initiate from a specific latent point instead of starting from zeros.

Given that a generator-style model was trained always receiving an all-zeros input, this may not work. I would rather use the vae-style model which has been more thoroughly validated.

As you said, you could either go with perturbing the latent code, or make decoding randomized so that it can return different results from a single latent code. The latter option is not exposed in the model wrapper, but you could modify the `sampling_mode` argument in `MoLeRInferenceServer` directly (see here). However, I'm not sure whether this would give better results than the perturbation-based approach; it would be best to try both and see which one works better empirically for your use case.
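The perturbation-based option can be sketched as follows. This is a minimal illustration, not code from the repository: `perturb_latent` and `noise_scale` are hypothetical names, and the commented-out usage assumes the `encode`/`decode` methods of the model wrapper returned by `load_model_from_directory` in the `molecule_generation` package:

```python
import numpy as np

def perturb_latent(latent: np.ndarray, num_variants: int,
                   noise_scale: float = 0.1, seed: int = 0) -> np.ndarray:
    """Return `num_variants` noisy copies of a single latent vector."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=noise_scale, size=(num_variants, latent.shape[-1]))
    return latent[np.newaxis, :] + noise

# Hypothetical usage with the VAE-style wrapper (not run here):
# with load_model_from_directory(model_dir) as model:
#     [z] = model.encode(["c1ccccc1"])
#     neighbours = model.decode(list(perturb_latent(z, num_variants=8)))
```

Smaller `noise_scale` values stay closer to the original molecule; larger ones explore a wider neighbourhood of the latent space.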

@kmaziarz kmaziarz self-assigned this Aug 22, 2024
@kmaziarz kmaziarz added the question Request for help or information label Aug 22, 2024