
Customizing my environment #273

Closed
3 tasks done
Royalvice opened this issue Sep 4, 2023 · 9 comments · May be fixed by #286

Comments

@Royalvice

Required prerequisites

Questions

Thank you very much for your contribution to this valuable repository.

I would like to quickly utilize the efficient safe-rl algorithm implemented in this repository in my own environment. Specifically, I have created a custom Unmanned Aerial Vehicle communication environment from scratch, including custom state and action spaces, as well as a custom reward function. I would like to quickly convert my custom environment into the API format accepted by this repository, but I haven't found many tutorials on creating custom environments. Could you please advise me on how to proceed? Could you recommend any resources for me to refer to?

Once again, thank you for your efforts, and I look forward to your response. Thank you!

@Royalvice Royalvice added the question Further information is requested label Sep 4, 2023
@Gaiejj Gaiejj added the environment Something related to the RL environment label Sep 4, 2023
@Gaiejj Gaiejj self-assigned this Sep 4, 2023
@Gaiejj
Member

Gaiejj commented Sep 4, 2023

Thank you very much for your recognition of our work. OmniSafe currently supports custom environments. Specifically:

However, if you still have questions about the customization of OmniSafe's environment after reading the above information, or if you encounter unexpected errors during the process, please feel free to provide more detailed information so that we can better assist you in resolving your issues.
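For readers arriving at this thread, a minimal sketch of what a SafeRL-style custom environment tends to look like may help (illustrative only; the `UAVCommEnv` class, its dynamics, and its method signatures are hypothetical, and this is not OmniSafe's actual registration API). The key difference from a plain Gym-style environment is that `step()` also returns a safety cost:

```python
import random

class UAVCommEnv:
    """Toy UAV communication environment sketch (hypothetical).

    A Gym-style interface extended with a safety cost, as SafeRL
    frameworks expect: step() returns
    (obs, reward, cost, terminated, truncated, info).
    """

    def __init__(self, max_steps=100):
        self.max_steps = max_steps
        self._t = 0
        self._altitude = 10.0

    def reset(self, seed=None):
        if seed is not None:
            random.seed(seed)
        self._t = 0
        self._altitude = 10.0
        return [self._altitude, 0.0], {}

    def step(self, action):
        # action: continuous climb rate, clipped to [-1.0, 1.0]
        self._altitude += max(-1.0, min(1.0, float(action)))
        self._t += 1
        reward = -abs(self._altitude - 10.0)          # track target altitude
        cost = 1.0 if self._altitude < 1.0 else 0.0   # safety: flying too low is unsafe
        terminated = self._altitude <= 0.0
        truncated = self._t >= self.max_steps
        obs = [self._altitude, float(self._t)]
        return obs, reward, cost, terminated, truncated, {}
```

Adapting such a class to a specific framework then mostly amounts to declaring its observation/action spaces in the form the framework expects and hooking into its environment registration mechanism.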

@Royalvice
Author

Thank you again. I currently have no more questions.

@Royalvice
Author

> Thank you very much for your recognition of our work. OmniSafe currently supports custom environments. Specifically:
>
> However, if you still have questions about the customization of OmniSafe's environment after reading the above information, or if you encounter unexpected errors during the process, please feel free to provide more detailed information so that we can better assist you in resolving your issues.

How should I proceed if the action space of the environment I want to design is discrete, but Omnisafe only accepts continuous action spaces of the Box class?

@Gaiejj
Member

Gaiejj commented Sep 6, 2023

Currently, OmniSafe does not support discrete action space environments. We plan to support them in a future version, since discrete environments also matter a lot in the SafeRL area.

@Royalvice
Author

> Currently, OmniSafe does not support discrete action space environments. We plan to support them in a future version, since discrete environments also matter a lot in the SafeRL area. We are sorry that OmniSafe does not currently meet your requirements.

I would like to quickly run the safe-rl algorithm in my personal environment. Is it feasible to discretize the actions directly in the 'step' function?
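For concreteness, the discretization this question describes might look like the following (a hypothetical sketch, not OmniSafe code; the bin values and class names are assumptions): round the actor's continuous Box action to the nearest entry of a fixed discrete action set at the top of `step()`, then delegate to the discrete environment.

```python
def discretize(action, bins=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Map a continuous action to the nearest value in a fixed discrete set.

    `bins` is an assumed 5-way discretization of a Box action in [-1, 1];
    the real set would come from the environment's discrete actions.
    """
    return min(bins, key=lambda b: abs(b - action))

class DiscretizingEnv:
    """Wraps a discrete-action environment so it accepts continuous actions."""

    def __init__(self, discrete_env, bins=(-1.0, -0.5, 0.0, 0.5, 1.0)):
        self.env = discrete_env
        self.bins = bins

    def step(self, action):
        # Discretize inside step(), as proposed above, then delegate.
        return self.env.step(discretize(float(action), self.bins))
```

Note that, as reported later in this thread, discretizing the actor's output this way did not yield satisfactory results in practice.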

@Royalvice
Author

I look forward to your prompt reply.

@zmsn-2077
Member

If you directly discretize actions within the step function, you need to consider whether the current algorithm supports discrete inputs. Currently, the algorithms we have implemented cannot handle discrete action inputs directly. This is mainly because most of the algorithm's original authors did not specify the algorithm's performance in a discrete environment. Supporting discrete action inputs is on our roadmap, but it will be implemented in future versions.

If this feature is crucial for you, we would greatly appreciate it if you could take the lead in adding it to OmniSafe, and we welcome your participation in its development.

According to our original roadmap, we will have this feature updated by early October.

@zmsn-2077 zmsn-2077 reopened this Sep 8, 2023
@Royalvice
Author

Royalvice commented Sep 8, 2023

Thank you for your response. As you mentioned, discretizing the actions predicted by the actor and inputting them directly to the environment did not yield satisfactory results. This feature is indeed important to me, but as a newcomer to RL, I will do my best to contribute to Omnisafe if possible. Thank you again.

@Gaiejj
Member

Gaiejj commented Sep 24, 2023

Feel free to reopen if you have any further issues.
