Cannot run launch_experiments.py #22

Open
xlws1 opened this issue Oct 24, 2024 · 5 comments

Comments

@xlws1

xlws1 commented Oct 24, 2024

Hello, I'm a first-year graduate student. When running the launch_experiments.py file in the multiagent_particles/experiments folder, I first got the following warning:
```
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
```
Then I noticed an error on the fourth line of the `def save_goal_agents(N, folder_name, agent_speeds):` function:

`exp_name = f"{folder_name}/{run}{behaviour}{str(speeds)}{n}"`

The error message indicated unresolved references to `behaviour`, `speeds`, and `n`.
After modifying the code to:

```python
import os
import subprocess


def save_goal_agents(N, folder_name, agent_speeds):
    runs = ["run_10", "run_11", "run_12", "run_13", "run_14"]
    # Ensure this list includes all required behaviours
    missing_agents_behaviours = ["idle", "random_player", "random"]
    for n in range(N):
        for run in runs:
            for behaviour in missing_agents_behaviours:
                for speeds in agent_speeds:
                    # Convert each element of the speeds list to a string and join with commas
                    speeds_str = ','.join(map(str, speeds))
                    exp_name = f"{folder_name}/{run}{behaviour}{speeds_str}{n}"
                    fname = f"{exp_name}.csv"
                    if not os.path.exists(fname):
                        command = f'python run.py --load-dir "saves/{run}/episode_200000/model" --exp-name {exp_name} --save-dir {folder_name} --rollout --num-episodes {N} --agent-speeds'
                        for speed in speeds:
                            command += f" {speed}"
                        subprocess.run(command, shell=True)
```
There are no more error messages, but I still receive the same TensorFlow contrib warning shown above.
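For context, here is a hypothetical usage sketch of the modified save_goal_agents above; the episode count, folder name, and speed configurations are illustrative assumptions, not values taken from the repository:

```python
# Hypothetical call site (all values below are assumptions for illustration only).
if __name__ == "__main__":
    agent_speeds = [[0.5, 0.5, 0.5], [1.0, 1.0, 1.0]]  # one list of per-agent speeds per configuration
    save_goal_agents(N=5, folder_name="rewards2-ht", agent_speeds=agent_speeds)
```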
@Fabien-Couthouis
Owner

Hi!
The TensorFlow version used in this work is tensorflow==1.15.0; please refer to the conda environment.yml file available here for package versions: https://github.com/Fabien-Couthouis/XAI-in-RL/blob/master/multiagent_particles/environment.yml

Hope it helps. Tell me if you have other issues.
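As a quick sanity check after creating the environment from that file, here is a minimal sketch (an editorial addition, not part of the repository) to confirm the interpreter picks up TensorFlow 1.15, where the contrib deprecation warning quoted above is expected and harmless:

```python
# Sketch: verify the active conda environment uses TensorFlow 1.15.x.
# On TF 1.x the "contrib will not be included in TensorFlow 2.0" message is
# only a deprecation notice and does not prevent the experiments from running.
import tensorflow as tf

print(tf.__version__)           # expect something like "1.15.0"
print(hasattr(tf, "contrib"))   # True on TF 1.x; contrib was removed in TF 2.x
```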

@xlws1
Author

xlws1 commented Oct 24, 2024

Thank you very much for your reply. I used the command conda env create -n mparticles environment.yml to create the experimental environment for multiagent_particles from the environment.yml file you provided, and I believe the environment itself is set up correctly (Figure 1).

To reproduce your code, I adjusted the --load-dir and --missing-agents-behaviour parameters and ran: python run.py --load-dir "saves/run_10/episode_200000/model" --missing-agents-behaviour "idle" --exp-name run_10_idle_0 --save-dir rewards2-ht/1 --shapley-M 1000 --num-episodes 1

After saving the experimental results to the folder I created, /rewards2-ht/1, I wanted to plot the results using the command python plots.py rewards --plot_type your_plot_type.

When I entered python plots.py --result_dir rewards2-ht/1 --plot-type model_rewards --model-dir saves/run_10/, the resulting graph did not match the one you provided (Figure 2).

Why did the rewards\exp2 path appear? I suspect that although I modified the parameters on the command line, the code did not execute as expected.

@Fabien-Couthouis
Owner

Fabien-Couthouis commented Oct 26, 2024

If you want to reproduce the Shapley value results, use the launch_experiments.py script:

python launch_experiments.py

@xlws1
Author

xlws1 commented Oct 29, 2024

I recreated the virtual environment and then ran the launch_experiments.py script. A new error occurred:

```
Traceback (most recent call last):
  File "E:\project\XAI-in-RL-master\multiagent_particles\experiments\launch_experiments.py", line 4, in <module>
    from rollout import rollout
  File "E:\project\XAI-in-RL-master\multiagent_particles\experiments\rollout.py", line 8, in <module>
    from maddpg.trainer.maddpg import MADDPGAgentTrainer
  File "E:\project\XAI-in-RL-master\multiagent_particles\experiments\maddpg\trainer\maddpg.py", line 6, in <module>
    from maddpg.common.distributions import make_pdtype
  File "E:\project\XAI-in-RL-master\multiagent_particles\experiments\maddpg\common\distributions.py", line 5, in <module>
    from multiagent.multi_discrete import MultiDiscrete
ModuleNotFoundError: No module named 'multiagent'
```

What is the reason for this?
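For reference, a common workaround for this kind of ModuleNotFoundError is to make the multiagent package importable before the maddpg imports run, either by installing the multiagent-particle-envs repository into the environment (for example with pip install -e . from its root) or by putting its checkout on the Python path. A minimal sketch of the latter, where the directory is an assumption and should be adjusted to your machine:

```python
# Sketch: add the multiagent-particle-envs checkout to sys.path so that
# "from multiagent.multi_discrete import MultiDiscrete" can resolve.
# MPE_DIR is an assumption; point it at your own checkout.
import os
import sys

MPE_DIR = r"E:\project\multiagent-particle-envs-master"
if os.path.isdir(MPE_DIR) and MPE_DIR not in sys.path:
    sys.path.insert(0, MPE_DIR)

from multiagent.multi_discrete import MultiDiscrete  # should now import successfully
```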

@xlws1
Author

xlws1 commented Oct 31, 2024

I added the multiagent-particle-envs-master directory to the system environment variables, which resolved that error. However, when I executed python launch_experiments.py, it reported a series of warnings and still did not run the experiment. I also checked the files in the XAI-in-RL-master\multiagent_particles directory and found three more places where PyCharm reports an error before running (three screenshots attached). Does this affect the experiment? I am unable to resolve these error messages. Can you please help me?
