
Use actual number of input splits when calculating target throughput per task #189

Open
anmsf opened this issue Nov 22, 2023 · 0 comments

anmsf (Contributor) commented Nov 22, 2023

Right now, it looks like the lib uses the maximum possible number of tasks when calculating target throughput per task:

long throughputPerTask = Math.max((long) (calculatedThroughput / maxParallelTasks), 1);

maxParallelTasks = Math.min(calculateMaxMapTasks(totalMapTasks), totalMapTasks);

But in provisioned mode with autoscaling and a low table minimum throughput (say, 100), this significantly underestimates per-task throughput for smaller inputs. It works out fine when the input is large enough that the actual mapper count is close to the max, but for smaller inputs the job can get stuck at a low throughput and never trigger autoscaling.

We've seen larger inputs trigger autoscaling all the way up to 500k, which is what we want. But for a smaller input we saw a case where the job kept on writing at 1 task/minute: there were only 8 tasks instead of the max 48 the library assumes, so we never hit the autoscale threshold.
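To make the mismatch concrete, here's a minimal sketch (not the library's actual code; the capacity figure is an assumed example) showing how dividing by the assumed max task count instead of the actual split count understates the per-task target by the same 48-vs-8 ratio described above:

```java
// Hypothetical sketch: per-task throughput as computed by the formula quoted above.
public class ThroughputSketch {
    // Mirrors: Math.max((long) (calculatedThroughput / taskCount), 1)
    static long perTaskThroughput(long calculatedThroughput, int taskCount) {
        return Math.max(calculatedThroughput / taskCount, 1);
    }

    public static void main(String[] args) {
        long calculatedThroughput = 4800; // assumed example target write capacity
        int maxParallelTasks = 48;        // max the library assumes
        int actualSplits = 8;             // actual mappers for a small input

        // Dividing by the assumed max gives each of the 8 real tasks a target
        // 6x lower than what they could safely use.
        System.out.println(perTaskThroughput(calculatedThroughput, maxParallelTasks)); // 100
        System.out.println(perTaskThroughput(calculatedThroughput, actualSplits));     // 600
    }
}
```

With only 8 tasks actually running, the aggregate write rate stays at 8 × 100 = 800 instead of 8 × 600 = 4800, which is why consumption never crosses the autoscale threshold.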

Is it possible to look at the actual number of input splits instead?
