Why discretize the action? #3
Hello Xue Liu,
In RL, the MDP formulation can be discrete or continuous depending on the
control environment and agent design. In this case, I applied SAC in the
discrete action domain on the CartPole game. There is a good summary of a
discrete implementation of SAC in this paper:
https://arxiv.org/abs/1910.07207
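To make the distinction concrete, here is a rough sketch (not code from this repository; the function names are hypothetical) of the key idea in the paper linked above: with a discrete action set, the policy is a categorical distribution, so the actor's expected soft value can be computed exactly over all actions instead of being estimated from reparameterized samples.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the action dimension.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def discrete_sac_actor_loss(logits, q_values, alpha):
    """Illustrative discrete-SAC actor objective.

    logits:   (batch, n_actions) unnormalized policy outputs
    q_values: (batch, n_actions) critic estimates Q(s, a) for every action
    alpha:    entropy temperature

    Returns the exact expectation E_a~pi [ alpha * log pi(a|s) - Q(s, a) ],
    averaged over the batch -- no action sampling needed.
    """
    probs = softmax(logits)
    log_probs = np.log(probs + 1e-8)  # small epsilon for numerical safety
    return (probs * (alpha * log_probs - q_values)).sum(axis=-1).mean()
```

Because the sum runs over the full (small) action set, the entropy term is exact, which is one reason discrete SAC can be stable on low-dimensional control tasks like CartPole.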
…On Wed, Dec 7, 2022 at 6:48 AM Xue Liu ***@***.***> wrote:
Hello, Dr. Haydari. I am a beginner in constrained RL. Through reading your
code, I found you discretize the action. Can you tell me the reason? Thanks!
--
Ammar Haydari
PhD Student
UC Davis
Hello, Dr. Haydari. Thanks for replying. Since the original action space is continuous, is it superfluous to do this? However, when I changed the code to adapt it to the original continuous action space, it didn't work well. Here is the revised code:

```python
import os

class SAC(object):
```
Hello, Dr. Haydari. After fixing several errors, I found it works. Here is the code:

```python
import os

class SAC(object):
```
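For reference, the piece a continuous-action SAC needs that a discrete one omits is the reparameterized, tanh-squashed Gaussian policy sample. Below is a minimal illustrative sketch of that step (the function name and shapes are assumptions for illustration, not taken from the code above):

```python
import numpy as np

def squashed_gaussian_sample(mu, log_std, rng):
    """Sample a bounded continuous action from a tanh-squashed Gaussian.

    mu, log_std: (batch, act_dim) policy-head outputs
    Returns the squashed action in (-1, 1) and its log-probability,
    including the tanh change-of-variables correction.
    """
    std = np.exp(log_std)
    eps = rng.standard_normal(mu.shape)
    pre_tanh = mu + std * eps            # reparameterized Gaussian sample
    action = np.tanh(pre_tanh)           # squash into the bounded range
    # Gaussian log-density of the pre-squash sample ...
    log_prob = (-0.5 * eps**2 - log_std - 0.5 * np.log(2 * np.pi)).sum(axis=-1)
    # ... minus the log |d tanh/dx| Jacobian term for the squashing.
    log_prob -= np.log(1.0 - action**2 + 1e-6).sum(axis=-1)
    return action, log_prob
```

Getting this Jacobian correction right is a common source of the "didn't work well" behavior when converting a discrete implementation back to continuous actions.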
@xueliu8617112
The value of `self.cost_lim` depends on your env setting. You can try a minimum value.
Hello, Dr. Haydari. I am a beginner in constrained RL. Through reading your code, I found you discretize the action. Can you tell me the reason? Thanks!