Right now, it looks like the library uses the maximum possible number of map tasks when calculating the target throughput per task:
emr-dynamodb-connector/emr-dynamodb-hadoop/src/main/java/org/apache/hadoop/dynamodb/write/WriteIopsCalculator.java (line 82 at commit ee52fdf):

    maxParallelTasks = Math.min(calculateMaxMapTasks(totalMapTasks), totalMapTasks);
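For context, here is a minimal sketch of the arithmetic as we understand it; the names and exact formula are ours, not quoted from the library. The per-task target appears to be roughly the table's write throughput divided by maxParallelTasks, so a small job still gets sliced by the cluster-wide maximum:

    // Illustrative only: shows why dividing by the assumed maximum (48) instead of the
    // actual split count (8) starves each task when the table sits at its autoscaling minimum.
    public class PerTaskThroughputSketch {

      // Approximates how we read the per-task target: table throughput / task count,
      // floored, never below 1. This is a sketch, not the connector's exact formula.
      static long perTaskThroughput(double tableWriteThroughput, int taskCount) {
        return Math.max((long) (tableWriteThroughput / taskCount), 1);
      }

      public static void main(String[] args) {
        double minThroughput = 100;  // autoscaling minimum from the case in this issue
        System.out.println(perTaskThroughput(minThroughput, 48)); // 2  -> utilization stays too low
        System.out.println(perTaskThroughput(minThroughput, 8));  // 12 -> enough to trip autoscaling
      }
    }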
But for provisioned mode with autoscaling and a low table minimum throughput (say, 100), this ends up significantly underestimating per-task throughput for smaller inputs. It works out okay when the input is large enough that the actual number of mappers is close to the maximum, but for smaller inputs the job can get stuck at a low throughput and never trigger autoscaling.

We've seen larger inputs trigger autoscaling all the way up to 500k, which is what we want. But for a smaller input we saw a case where it kept writing at 1 task/minute, since there were just 8 tasks instead of the 48 the library assumes, so we never hit the autoscaling threshold.
Is it possible to look at the actual number of splits instead?
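To make the ask concrete, here is a hypothetical sketch (not a patch against the connector; the helper names are ours) of capping the divisor at the job's actual split count:

    // Hypothetical sketch of "use the actual number of splits": cap the divisor at the
    // number of splits the job will actually run, rather than the theoretical maximum.
    public class SplitAwareDivisorSketch {

      // Stand-in for the connector's cluster-capacity estimate (assumed, not the real logic).
      static int calculateMaxMapTasks(int totalMapTasks) {
        return 48;
      }

      static int calculateDivisor(int totalMapTasks, int actualMapTasks) {
        int maxParallelTasks = Math.min(calculateMaxMapTasks(totalMapTasks), totalMapTasks);
        // Never assume more parallelism than the job actually has.
        return Math.min(maxParallelTasks, actualMapTasks);
      }

      public static void main(String[] args) {
        // Small input: 8 splits instead of the assumed 48, so 100 WCU / 8 = 12 writes/sec
        // per task rather than 100 / 48 = ~2, which should be enough to trip autoscaling.
        System.out.println(calculateDivisor(48, 8)); // prints 8
      }
    }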