Replies: 3 comments 1 reply
-
Could you provide a minimum reproducible example, or console outputs/logs? This would help us reproduce the issue.
-
Yeah, sure, thank you. This is the basic code I'm running in v1.0.1 of YT. The only changes I have made are to hardcode the path to the GameModeConfig file and to fix issue 10, "Incorrect reward for blue agent reaching max_steps". This is the same setup I had for v0.1.0, which produced reasonable-looking results.
Appreciate any help.
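For readers without the original snippet to hand: the fix for issue 10 mentioned above concerns the terminal reward the blue (defending) agent receives when an episode runs to max_steps. A minimal sketch of that kind of end-of-episode logic is below; the function and parameter names here are illustrative, not Yawning-Titan's actual API:

```python
# Hedged sketch of the terminal-reward logic implied by issue 10
# ("Incorrect reward for blue agent reaching max_steps").
# Surviving to max_steps should be a win for blue, not a penalty.
def end_of_episode_reward(current_step: int, max_steps: int, red_won: bool,
                          win_reward: float = 100.0,
                          loss_penalty: float = -100.0) -> float:
    """Return the terminal reward for the blue (defending) agent."""
    if red_won:
        return loss_penalty   # red compromised the network: blue loses
    if current_step >= max_steps:
        return win_reward     # blue held out for the whole episode: blue wins
    return 0.0                # episode not yet over
```

The key point is the middle branch: before the fix, reaching max_steps was reportedly treated like a loss rather than a successful defence.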
-
Hello, we have not heard from you in a while.
-
Hello all, I have upgraded from a prior version of Yawning-Titan. I was using the 18-node network and a custom 10-node network to train on a default game mode that had been edited to 100 steps instead of 1000. The other change made was to fix "rewards_are_muiltipled_by_end_state". Some of my training results are shown below.
Upon upgrading to the v1.0.1 release and attempting to run the same scenarios, I got the results below. As you can see, training is not going well and gets stuck fairly consistently at the same place.
As far as I can tell, all of the settings in the game/agent/network are the same: adjusted to 100 steps, the same network, reward multiplied by end state, and running in a conda environment with the required packages.
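For concreteness, the game-mode edit described (100 steps instead of the default 1000) would be made in the game mode's YAML configuration; the section and key names below are an assumption and may differ between Yawning-Titan versions:

```yaml
# Hypothetical game mode YAML fragment; section/key names may vary by version.
GAME_RULES:
  max_steps: 100   # edited down from the default of 1000, as described above
```

It may be worth diffing the old and new game mode files directly, since a renamed or newly defaulted key between versions could silently change behaviour.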
I'm not sure what I might have missed that's making training perform poorly, and I would appreciate any advice you might have. Are there any settings that might have changed that I have missed?