Configured throughput and percentage ignored #183

Open
anmsf opened this issue May 8, 2023 · 1 comment
@anmsf (Contributor) commented May 8, 2023

No matter what write throughput we configure (dynamodb.throughput.write) or what percentage we set (dynamodb.throughput.write.percent), our Hive job blows straight past the configured limits. The effective throughput ends up being determined entirely by the number of mappers (at least beyond a certain mapper count).
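For concreteness, a hedged sketch of the two settings in question, expressed as Hadoop configuration in Java (in our Hive jobs these are plain SET statements; the values below are made up for illustration):

```java
import org.apache.hadoop.conf.Configuration;

public class ThroughputSettingsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Target write capacity units for the job (value made up for the example).
    conf.set("dynamodb.throughput.write", "1000");
    // Fraction of that capacity the job should consume (value made up).
    conf.set("dynamodb.throughput.write.percent", "0.5");
    System.out.println("write=" + conf.get("dynamodb.throughput.write")
        + " percent=" + conf.get("dynamodb.throughput.write.percent"));
  }
}
```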

Example of the resulting failure:

 Caused by: java.lang.RuntimeException: com.amazonaws.services.dynamodbv2.model.RequestLimitExceededException: Throughput exceeds the current throughput limit for your account. Please contact AWS Support at https://aws.amazon.com/support to request a limit increase (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: RequestLimitExceeded; Request ID: <>; Proxy: null)
	at org.apache.hadoop.dynamodb.DynamoDBFibonacciRetryer.handleException(DynamoDBFibonacciRetryer.java:106)
	at org.apache.hadoop.dynamodb.DynamoDBFibonacciRetryer.runWithRetry(DynamoDBFibonacciRetryer.java:81)
	at org.apache.hadoop.dynamodb.DynamoDBClient.writeBatch(DynamoDBClient.java:263)
	at org.apache.hadoop.dynamodb.DynamoDBClient.putBatch(DynamoDBClient.java:220)

It does not look like that particular exception is retried; it's re-thrown from the else block in the handleException() method. I also don't see any max retry limit for the exceptions it *does* handle: https://github.com/awslabs/emr-dynamodb-connector/blob/e1c937ff5d55711c91b575d33fa0281bc5fb4323/emr-dynamodb-hadoop/src/main/java/org/apache/hadoop/dynamodb/DynamoDBFibonacciRetryer.java#L106
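For reference, a hedged sketch of how I read that decision shape (a paraphrase, not the verbatim source; only the method and field names follow the linked file):

```java
import java.util.Set;
import com.amazonaws.AmazonServiceException;
import com.google.common.collect.ImmutableSet;

// Paraphrased sketch of DynamoDBFibonacciRetryer's handleException() logic,
// not the verbatim source: throttle-type errors take the backoff/retry path
// (with no max-retry check), everything else is rethrown.
public class RetryDecisionSketch {
  private static final Set<String> throttleErrorCodes =
      ImmutableSet.of("ProvisionedThroughputExceededException", "ThrottlingException");

  void handleException(Exception e) {
    if (e instanceof AmazonServiceException) {
      AmazonServiceException ase = (AmazonServiceException) e;
      if (throttleErrorCodes.contains(ase.getErrorCode())
          || ase.getStatusCode() == 500
          || ase.getStatusCode() == 503) {
        backOffAndRetry(); // Fibonacci delay; note there is no retry cap here
        return;
      }
    }
    // "RequestLimitExceeded" is not in the set above, so it ends up here,
    // which is the rethrow the stack trace shows.
    throw new RuntimeException(e);
  }

  private void backOffAndRetry() { /* Fibonacci backoff elided */ }
}
```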

  1. The connector apparently only retries ProvisionedThroughputExceededException, ThrottlingException, and HTTP 500/503 responses, so RequestLimitExceededException doesn't get the normal connector retries. It only gets the usual 4 Hive task retries and then fails.
    I'm arguing here that ideally we shouldn't have to care about exceeding max throughput: the connector should drive throughput up to the maximum we specify and throttle itself as needed to stay within table and account throughput bounds.

  2. The other issue is that throughput ends up being determined by the total number of input splits/mappers. Would it be reasonable to change the library so that map tasks wait instead of writing once the configured throughput/percentage is hit (see the sketch after this list)? Right now it looks like there is a minimum of 1 item "per second", though as far as I can see that is effectively 1 item per put API call, not 1 per second per task.
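To illustrate item 2, a hypothetical sketch (none of these names exist in the connector) of what blocking each map task against its share of the configured throughput could look like, here using Guava's RateLimiter:

```java
import com.google.common.util.concurrent.RateLimiter;

// Hypothetical illustration only; the connector does not contain this class.
// Each map task gets an equal share of the configured write capacity and
// blocks before each batch write until that share has accrued.
public class PerTaskWriteThrottle {
  private final RateLimiter limiter;

  // writeCapacityUnits and writePercent mirror dynamodb.throughput.write and
  // dynamodb.throughput.write.percent; totalMapTasks is the input split count.
  public PerTaskWriteThrottle(double writeCapacityUnits, double writePercent,
      int totalMapTasks) {
    double perTaskRate = (writeCapacityUnits * writePercent) / totalMapTasks;
    // Keep a small positive floor so a task with a tiny share still progresses.
    this.limiter = RateLimiter.create(Math.max(perTaskRate, 0.1));
  }

  // Call before each batch write; blocks until enough write capacity accrues.
  public void beforeWrite(int consumedWriteUnits) {
    limiter.acquire(Math.max(consumedWriteUnits, 1));
  }
}
```

Waiting like this would keep the aggregate rate near (capacity × percent) regardless of how many mappers the job happens to get.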

@mimaomao (Contributor) commented

As we can see from here, if auto scaling is enabled, the configured write throughput will be ignored. The user can disable auto scaling via the config dynamodb.throughput.write.autoscaling=false. However, I don't think this config really helps the use case you mentioned.

Also, as we can see from here, RequestLimitExceededException is not included in throttleErrorCodes. At a minimum, we can fix this issue by adding a new entry for that error code so the application retries when it gets throttled.
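A hedged sketch of that change (the field name follows the discussion above; the actual initialization in DynamoDBFibonacciRetryer may differ):

```java
import java.util.Set;
import com.google.common.collect.ImmutableSet;

class DynamoDBFibonacciRetryerFixSketch {
  // "RequestLimitExceeded" is the error code reported by the stack trace in
  // this issue; adding it makes account-level throttling retryable as well.
  private static final Set<String> throttleErrorCodes = ImmutableSet.of(
      "ProvisionedThroughputExceededException",
      "ThrottlingException",
      "RequestLimitExceeded"); // new entry
}
```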
