Error when loading en+hi model #7
I am getting this error when trying to load the newly uploaded en+hi model. Any help would be appreciated. @GokulNC @ashwin-014
Comments
Sorry for the late response, but can you please try again after redownloading?
Hey @GokulNC, I am facing the same issue. Do you know the probable cause? Traceback:
python3 -m TTS.bin.synthesize --text "Namaste! How are you? Kal milte hai" --model_path "models/v1/hin/fastpitch/best_model.pth" --config_path "models/v1/hin/fastpitch/config.json" --vocoder_path "models/v1/hin/hifigan/best_model.pth" --vocoder_config_path "models/v1/hin/hifigan/config.json" --speaker_idx male --out_path "temp.wav" --use_cuda true
> Using model: fast_pitch
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:0.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Init speaker_embedding layer.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/ai-inference/TTS/TTS/bin/synthesize.py", line 425, in <module>
main()
File "/usr/ai-inference/TTS/TTS/bin/synthesize.py", line 322, in main
synthesizer = Synthesizer(
File "/usr/ai-inference/TTS/TTS/utils/synthesizer.py", line 78, in __init__
self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
File "/usr/ai-inference/TTS/TTS/utils/synthesizer.py", line 120, in _load_tts
self.tts_model.load_checkpoint(self.tts_config, tts_checkpoint, eval=True)
File "/usr/ai-inference/TTS/TTS/tts/models/forward_tts.py", line 839, in load_checkpoint
self.load_state_dict(state["model"])
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ForwardTTS:
size mismatch for emb_g.weight: copying a param with shape torch.Size([4, 512]) from checkpoint, the shape in current model is torch.Size([2, 512]).
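The mismatch on emb_g.weight (the speaker-embedding table) suggests the checkpoint was trained with more speakers than the loaded config declares (4 rows in the checkpoint vs. 2 in the freshly built model). A quick way to confirm is to inspect both files directly. The sketch below is a minimal, unofficial diagnostic: it assumes the checkpoint is an ordinary torch-saved dict with a "model" key (as the traceback implies) and that the config is plain JSON; the num_speakers / use_speaker_embedding keys are assumptions about the config layout, not confirmed field names.

import json
import torch

# Paths taken from the synthesize command above.
ckpt = torch.load("models/v1/hin/fastpitch/best_model.pth", map_location="cpu")

# The traceback shows load_state_dict(state["model"]), so the weights live under "model".
emb_g = ckpt["model"].get("emb_g.weight")
print("checkpoint emb_g.weight shape:", None if emb_g is None else tuple(emb_g.shape))

with open("models/v1/hin/fastpitch/config.json") as f:
    cfg = json.load(f)

# Assumed config keys: whichever speaker-count field the config actually uses
# should match the first dimension printed above.
print("config num_speakers:", cfg.get("num_speakers"))
print("config use_speaker_embedding:", cfg.get("use_speaker_embedding"))

If the checkpoint really has 4 embedding rows while the config only declares 2 speakers, the files are likely from mismatched releases, and redownloading a matching config/checkpoint pair (as suggested above) would be the first thing to try.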
@tanmayh-fg were you able to solve this?
I am facing the same issue.