I compared two experimental data setups.
Setting 1: WenetSpeech (Chinese) only
Setting 2: Wenet + Giga (about 1:1, Chinese + English)
Interestingly, the loss in setting 1 fails to decrease normally (blue curve in the image below), while setting 2, mixed with English, converges normally.
Have you observed this phenomenon in your experiments?
This situation is somewhat unusual. You may use a small amount of Chinese data (approximately 500 hours) to verify whether this issue always arises when the model is trained on purely Chinese data.
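To run that check, the subset can be carved out by accumulating utterances until roughly 500 hours are collected. A minimal sketch, assuming a JSONL-style manifest where each line carries a `duration` field in seconds (the manifest format, field name, and helper are assumptions, not part of this repo):

```python
import json

TARGET_HOURS = 500.0

def subset_manifest(lines, target_hours=TARGET_HOURS):
    """Keep utterances from a JSONL manifest until ~target_hours of audio
    is accumulated; returns the kept entries and the total hours kept."""
    total_s, kept = 0.0, []
    for line in lines:
        cut = json.loads(line)
        kept.append(cut)
        total_s += cut["duration"]  # duration in seconds (assumed field name)
        if total_s >= target_hours * 3600:
            break
    return kept, total_s / 3600

# Toy in-memory manifest: three 1-hour utterances, subset to ~2 hours.
toy = [json.dumps({"id": f"utt{i}", "duration": 3600.0}) for i in range(3)]
kept, hours = subset_manifest(toy, target_hours=2)
print(len(kept), hours)  # → 2 2.0
```

For a real run you would stream the WenetSpeech manifest file line by line instead of the toy list, ideally after shuffling so the 500 hours are not biased toward one domain.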