
I notice that MAPPO is supported in 0.11.0 #77

Answered by Toni-SM
394262597 asked this question in Q&A

Hi @394262597

The training/evaluation of multi-agent RL algorithms using skrl requires the environment (wrapped environment) to have a specific interface.
The wrapped environment interface follows the Farama PettingZoo API, as shown in https://skrl.readthedocs.io/en/multi-agent/api/envs/multi_agents_wrapping.html
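
As a minimal sketch (not part of the original answer) of what that wrapping can look like, assuming skrl's wrap_env utility and PettingZoo's simple_spread task as the environment; the exact import path and wrapper name vary between skrl versions, so check the linked documentation:

```python
# Hypothetical sketch: wrap a PettingZoo parallel environment so it exposes
# the multi-agent interface skrl expects. The import path and the
# "pettingzoo" wrapper name are assumptions; verify them against the skrl
# documentation for your installed version.
from pettingzoo.mpe import simple_spread_v3  # any PettingZoo parallel env

from skrl.envs.torch import wrap_env  # newer skrl: skrl.envs.wrappers.torch

env = simple_spread_v3.parallel_env()
env = wrap_env(env, wrapper="pettingzoo")

# the wrapped environment now follows the documented multi-agent interface
print(env.num_envs, env.num_agents)
```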

In your case, it is necessary to write a wrapper yourself (inheriting from skrl's MultiAgentEnvWrapper base class)...
or (better?) design your environment to follow the Bi-DexHands interface, so that you can simply use skrl's Bi-DexHands wrapper.

In the second case, your Omniverse Isaac Gym environment must have the following properties (a minimal sketch is given after the list):

  • num_envs: int
  • num_agents: int
  • observation_space: …
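
The skeleton below is not from the answer; the task name, observation shape, and use of gym.spaces.Box are illustrative assumptions, and since the property list above is truncated only the quoted attributes are shown:

```python
# Hypothetical skeleton of an Omniverse Isaac Gym task exposing the
# Bi-DexHands-style properties quoted above. Names, shapes and spaces are
# assumptions; the full required interface is longer than the truncated
# list in the answer, so consult the skrl wrapping documentation.
from gym import spaces


class MyMultiAgentTask:
    def __init__(self, num_envs: int = 64, num_agents: int = 2):
        self.num_envs = num_envs      # number of parallel environments
        self.num_agents = num_agents  # number of agents per environment
        # per-agent observation spaces (here assumed to be a list of Box spaces)
        self.observation_space = [
            spaces.Box(low=-1.0, high=1.0, shape=(24,)) for _ in range(num_agents)
        ]
        # ... remaining properties required by the Bi-DexHands interface

    def reset(self):
        # return the initial observations for all agents and environments
        ...

    def step(self, actions):
        # advance the simulation; return observations, rewards, dones, infos
        ...
```

An environment exposing this interface could then, presumably, be wrapped with skrl's Bi-DexHands wrapper (for example wrap_env(env, wrapper="bidexhands")), as described in the wrapping documentation linked above.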

Answer selected by 394262597
This discussion was converted from issue #76 on May 30, 2023 13:20.