Dear Shangeth, thank you very much for your repository; it has been extremely useful and I have learned a lot from it. I have no experience with large pre-trained models, but I am interested in trying wav2vec 2.0 as an encoder, especially wav2vec 2.0 XLSR-53, which has been trained on multiple languages. Is there any short-term plan to include these models in the repository?
Answered by shangeth, May 1, 2021
Answer selected by yogso
Hi @yogso,
If you want it ASAP, check the Hugging Face and fairseq repositories for wav2vec 2.0 models. And yes, I am planning to add the wav2vec 2.0 encoder soon.
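For anyone following up in the meantime, here is a minimal sketch of what using the XLSR-53 checkpoint from the Hugging Face hub as an encoder could look like. This is not code from this repository; it assumes the `transformers` and `torch` packages and the public model id `facebook/wav2vec2-large-xlsr-53`, and it simply extracts frame-level hidden states from a waveform:

```python
# Sketch (not part of this repo): wav2vec 2.0 XLSR-53 as a speech encoder
# via Hugging Face transformers. Assumes the public checkpoint id
# "facebook/wav2vec2-large-xlsr-53"; the first call downloads the weights.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Default extractor settings match wav2vec 2.0: mono 16 kHz float audio.
extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model.eval()

# One second of silence as a placeholder; replace with a real waveform.
waveform = torch.zeros(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state: (batch, frames, hidden_size); the conv front end
    # downsamples 16 kHz audio to roughly one frame per 20 ms.
    hidden = model(inputs.input_values).last_hidden_state
print(hidden.shape)
```

These frame-level features could then be fed to a downstream classification or recognition head in place of the repository's current encoder output.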