Hi, I tried to run the shallow IMPALA model on the DMLab-30 language levels. Apart from language_select_described_object, both language_select_located_object and language_execute_random_task are stuck at 0 reward. I'm not sure whether any hyperparameters need tuning or whether more training frames are needed.
The only change I made was swapping the larger model for the shallow model in the source code. The number of actors is 48, the batch size is 32, and all other parameters use their default values.
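For context, by "shallow model" I mean the two-layer A3C-style convolutional torso described in the IMPALA paper (Espeholt et al., 2018), as opposed to the 15-layer residual network. Below is a minimal sketch of that torso, written with TF/Keras layers for brevity rather than this repository's actual code; the function name `shallow_torso` and the input tensor `frames` are just placeholders, and only the layer sizes come from the paper.

```python
import tensorflow as tf


def shallow_torso(frames):
  """Sketch of the shallow (A3C-style) IMPALA torso.

  `frames` is assumed to be a float tensor of shape
  [batch, height, width, channels]. Layer sizes follow the IMPALA
  paper, not this repository's source.
  """
  x = tf.keras.layers.Conv2D(16, 8, strides=4, activation='relu')(frames)
  x = tf.keras.layers.Conv2D(32, 4, strides=2, activation='relu')(x)
  x = tf.keras.layers.Flatten()(x)
  # 256-unit fully connected output; in the paper this feeds an LSTM core.
  return tf.keras.layers.Dense(256, activation='relu')(x)
```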