Customizing my environment #273
Thank you very much for your recognition of our work. OmniSafe currently supports custom environments; a sketch of the expected interface is given below.
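As a rough illustration only (not verbatim OmniSafe code: the import path `omnisafe.envs.core`, the `CMDP` base class, and the `env_register` decorator should be checked against the documentation of your installed version, and the class name `MyUAVEnv`, the id `MyUAV-v0`, and the space shapes are placeholders for your own task), a custom environment would look something like this:

```python
# Minimal sketch of a custom OmniSafe environment. Assumes the CMDP base
# class and env_register decorator from omnisafe.envs.core; class name,
# env id, and space shapes are placeholders, not OmniSafe API.
from __future__ import annotations

from typing import Any, ClassVar

import torch
from gymnasium import spaces

from omnisafe.envs.core import CMDP, env_register


@env_register
class MyUAVEnv(CMDP):
    """Hypothetical UAV communication environment in OmniSafe's CMDP format."""

    _support_envs: ClassVar[list[str]] = ['MyUAV-v0']  # ids OmniSafe can build
    need_auto_reset_wrapper: bool = True   # reset automatically on episode end
    need_time_limit_wrapper: bool = False

    def __init__(self, env_id: str, **kwargs: Any) -> None:
        super().__init__(env_id)
        # Continuous Box spaces, as OmniSafe's algorithms currently expect.
        self._observation_space = spaces.Box(low=-1.0, high=1.0, shape=(8,))
        self._action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,))

    def reset(
        self, seed: int | None = None, options: dict[str, Any] | None = None
    ) -> tuple[torch.Tensor, dict[str, Any]]:
        obs = torch.zeros(8)  # placeholder initial state
        return obs, {}

    def step(self, action: torch.Tensor):
        obs = torch.zeros(8)               # placeholder next state
        reward = torch.as_tensor(0.0)      # your reward function goes here
        cost = torch.as_tensor(0.0)        # safety cost used by the constraint
        terminated = torch.as_tensor(False)
        truncated = torch.as_tensor(False)
        return obs, reward, cost, terminated, truncated, {}

    def set_seed(self, seed: int) -> None:  # seeding hook
        pass

    def sample_action(self) -> torch.Tensor:
        return torch.as_tensor(self._action_space.sample())

    def render(self) -> Any:
        return None

    def close(self) -> None:
        pass
```

Once registered, training should follow the same entry point as the built-in tasks, e.g. `agent = omnisafe.Agent('PPOLag', 'MyUAV-v0')` followed by `agent.learn()`; again, please verify the exact signatures against the documentation.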
However, if you still have questions about customizing environments in OmniSafe after reading the above, or if you encounter unexpected errors along the way, please feel free to provide more detailed information so that we can better assist you in resolving your issues.
Thank you again. I currently have no more questions.
How should I proceed if the action space of the environment I want to design is discrete, but OmniSafe only accepts continuous action spaces of the Box class?
Currently, OmniSafe does not support environments with discrete action spaces. We plan to support them in a future version, since discrete environments also matter a great deal in the SafeRL area.
I would like to quickly run a safe RL algorithm in my personal environment. Is it feasible to discretize the actions directly in the `step` function?
I look forward to your prompt reply.
If you directly discretize actions within the `step` function, the results may not be satisfactory: the actor still learns a continuous policy over the Box space and is unaware of the quantization. If this feature is crucial for you, we would greatly appreciate your leadership in adding it to OmniSafe, and we would also welcome your participation in the development of this feature.
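For concreteness, the workaround under discussion would look roughly like the sketch below: keep a Box action space toward the agent and bin each continuous action inside `step`. This is only an illustration of the idea (the helper name and `n_bins` are made up, not OmniSafe API), and it shows why the approach can underperform:

```python
import numpy as np


def continuous_to_discrete(action: np.ndarray, low: float, high: float,
                           n_bins: int) -> np.ndarray:
    """Map each component of a continuous Box action to a bin in [0, n_bins)."""
    frac = (np.clip(action, low, high) - low) / (high - low)
    return np.minimum((frac * n_bins).astype(np.int64), n_bins - 1)


# Inside a custom `step`, one might then do (sketch; the lookup table is
# hypothetical):
#     idx = continuous_to_discrete(action.numpy(), -1.0, 1.0, n_bins=5)
#     true_action = my_discrete_action_table[idx]
# The policy still optimizes a continuous distribution over the Box space,
# so the bin boundaries are invisible to the gradient; this is a plausible
# cause of the unsatisfactory results mentioned in the next comment.
```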
Thank you for your response. As you mentioned, discretizing the actions predicted by the actor and feeding them directly to the environment did not yield satisfactory results. This feature is indeed important to me, and although I am a newcomer to RL, I will do my best to contribute to OmniSafe if possible. Thank you again.
Feel free to open a new issue if you have further questions.
Questions
Thank you very much for your contribution to this valuable repository.
I would like to quickly use the efficient safe RL algorithms implemented in this repository in my own environment. Specifically, I have created a custom unmanned aerial vehicle (UAV) communication environment from scratch, including custom state and action spaces as well as a custom reward function. I would like to convert my custom environment into the API format accepted by this repository, but I have not found many tutorials on creating custom environments. Could you please advise me on how to proceed, and recommend any resources I could refer to?
Once again, thank you for your efforts, and I look forward to your response. Thank you!