Why do we need to set causal_mask=True in CLIP? #13

Open · liuxiaoqun opened this issue Mar 12, 2024 · 2 comments

@liuxiaoqun

The prompt is a sentence, and we don't need to predict the next token of the prompt, so is there any problem with letting each token see the tokens to its right?

```python
x = self.attention(x, causal_mask=True)
```
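For context, here is a minimal sketch of what a `causal_mask=True` flag conventionally does inside scaled dot-product attention. The helper below is illustrative only and is an assumption, not this repo's actual `SelfAttention` internals:

```python
import torch

def attention(q, k, v, causal_mask=False):
    # q, k, v: (batch, seq_len, d_head)
    scores = q @ k.transpose(-1, -2) / (q.shape[-1] ** 0.5)
    if causal_mask:
        # Mask the upper triangle: token i cannot attend to tokens j > i
        future = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(future, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # each row sums to 1 over visible tokens
    return weights @ v
```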

@RohollahHS

This is my question as well.

@ninaddaithankar

I had the same query. This is the answer I found in the CLIP paper by OpenAI:

"Masked self-attention was used in the text encoder to preserve the ability to initialize with a pre-trained language model or add language modeling as an auxiliary objective, though exploration of this is left as future work."

Paper: https://arxiv.org/pdf/2103.00020
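Also worth noting: the paper says CLIP pools the text feature from the activations at the end-of-text token, and under a causal mask the last position still attends to every earlier token, so the pooled embedding still summarizes the whole prompt. A quick check of that last point, using PyTorch's built-in `scaled_dot_product_attention` (PyTorch ≥ 2.0 assumed):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
q = k = v = torch.randn(1, 8, 16)  # (batch, seq_len, d_head)

causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)
full = F.scaled_dot_product_attention(q, k, v, is_causal=False)

# The final position sees the entire sequence either way,
# so its output is unaffected by the causal mask.
print(torch.allclose(causal[:, -1], full[:, -1]))  # True
```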
