Policy lock exception, HTTP Status: 423 Locked #2151
Comments
You will have to manage it by specifying -parallelism=?. There is no way around that.
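For reference, the flag is passed directly to apply; the value 1 below is the fully serialized case discussed later in this thread, and larger values may work depending on your rate limits:

terraform apply -parallelism=1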
@duytiennguyen-okta I think it would be really great if you could document this limitation on the okta_group_rule resource page. It would save a lot of precious time. :-) Thank you so much
@duytiennguyen-okta I would ask Okta to reconsider the decision not to fix this issue. With only one thread the deployment takes a very long time to finish, and it will only get slower as more and more resources are managed under Terraform. Thanks
@duytiennguyen-okta Is the lack of concurrency support on this API documented anywhere? I think this is the first Okta API I've come across where I can't update multiple resources concurrently.
@supratikg One thing you can do is file a support ticket to have your Okta account manager modify the API rate limits at a granular level on your account. You can see your rate limits in the Admin Console under Reports > Rate Limits. You can also try increasing parallelism to more than 1. I will also need the support ticket to raise an issue with the API team about potentially switching from 429 to 423 and using the backoff strategy that we already have.
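For context, a minimal sketch of the provider-level backoff settings referred to above. The argument names (backoff, max_retries, min_wait_seconds, max_wait_seconds) assume the documented okta provider configuration for recent versions, and the values shown are illustrative only; verify against the version you run:

provider "okta" {
  org_name = "your-org"   # illustrative
  base_url = "okta.com"
  # Exponential backoff applies to rate-limited (429) responses;
  # whether it also covers 423 is the open question in this thread.
  backoff          = true
  max_retries      = 5
  min_wait_seconds = 30
  max_wait_seconds = 300
}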
@exitcode0 It is not that the Okta API does not support concurrency; I think it is hitting rate limits. @supratikg mentioned that he has hundreds of groups, plus group rules on top of that. My expectation is that both fall under the api/v1/groups rate-limit bucket, which is why it returns 423. As for returning 429 versus 423, I will have to speak with the API team about that.
Thanks for your reply. This is my Okta internal case ID 02193358. We didn't receive any API rate limit warning; however, I will verify that. Does Terraform retry if the response code is 423?
To expand on my previous concurrency comments: that said, I think concurrency is the lower priority here, as it is likely the harder problem to fix.
@exitcode0 @supratikg After discussion with the API team: this API endpoint does not support parallelization well. The problem is that for every new policy rule created, the API needs to lock all the policy rules and update the priority of every existing rule, which is why it returns 423. I will update the provider to support retrying on 423, but fundamentally this is because the API does not support parallelization.
Okta internal reference: https://oktainc.atlassian.net/browse/OKTA-849353
Hello,
We have hundreds of Okta group rules that are updated simultaneously. We first create a group and then a group rule in which that group is used. The code works fine, but some of the group rule requests occasionally fail with the error below while creating the resource.
E0000239: Policy lock exception
HTTP Status: 423 Locked
Policy priorities are being reconciled. Please try again later.
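For illustration, a minimal sketch of the group-plus-group-rule pattern described above, assuming the okta_group and okta_group_rule resources; the resource names, expression, and attribute values are made up:

resource "okta_group" "example" {
  name        = "example-group"
  description = "Managed by Terraform"
}

resource "okta_group_rule" "example" {
  name              = "example-group-rule"
  status            = "ACTIVE"
  expression_type   = "urn:okta:expression:1.0"
  expression_value  = "user.department == \"Engineering\""
  group_assignments = [okta_group.example.id]
}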
We discussed the problem with Okta and they mentioned that "the SDK does not have parallelism built into it". Since Terraform runs 10 parallel operations by default, it runs into this problem. Okta suggests that this should be handled on the client side, which in our case is the Okta Terraform provider.
The error no longer appears when we run apply with the flag -parallelism=1; however, this significantly increases how long the apply takes.
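As a small workaround on our side, the flag can also be set through Terraform's standard TF_CLI_ARGS_apply environment variable (a core Terraform feature, not something specific to this provider), so it applies on every run without having to be remembered:

export TF_CLI_ARGS_apply="-parallelism=1"
terraform apply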
Could you please look into the issue and provide a fix?
Thanks in advance
Terraform Version
Terraform v1.9.5
Affected Resource(s)
okta_group_rule
Expected Behavior
No error
Can this be done in the Admin UI?
I don't know
Can this be done in the actual API call?
I don't know
Steps to Reproduce
terraform apply