When I tried your idea, I found that the text tokenizer in LLaVA-1.5-7B can only handle a maximum sequence length of 77 tokens. However, some datasets, including TextVQA and MM-Vet, can exceed this limit. How do you deal with this problem?
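For context, the 77-token ceiling comes from the CLIP-style text encoder's fixed context window (77 positions, two of which are reserved for the BOS/EOS tokens). A common workaround is to either truncate the prompt or split it into windows that each fit the encoder. Below is a minimal, dependency-free sketch of the chunking approach; the BOS/EOS token IDs (49406/49407) are CLIP's defaults and are an assumption here, as is the helper name `chunk_tokens`:

```python
def chunk_tokens(token_ids, max_len=77, bos=49406, eos=49407):
    """Split a long token-ID sequence into encoder-sized windows.

    Each window holds at most (max_len - 2) content tokens, leaving
    room for the BOS and EOS markers the text encoder expects.
    BOS/EOS IDs default to CLIP's values (an assumption, not verified
    against the LLaVA-1.5 code).
    """
    body = max_len - 2  # content tokens per window
    return [
        [bos] + token_ids[i : i + body] + [eos]
        for i in range(0, len(token_ids), body)
    ]


# Example: 150 content tokens do not fit in one 77-token window,
# so they are split into two windows.
chunks = chunk_tokens(list(range(150)))
```

Whether simple truncation is acceptable (dropping the tail of the question) or chunked encoding with pooled features is needed depends on how the text embeddings are consumed downstream, so it would help if the authors could confirm which strategy they used.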