DynamoDB WCU is not utilized #23
Comments
I noticed the same thing from the metrics: it's using 55 WCU on average. I have a table with over 60k items at around 11 KB per item, and it's taking at least 10 minutes to copy the first 3k items even though I've provisioned 1000 WCU. For now, I ended up testing with this AWS Labs library (it's in Java, though). It copied 68k items in 18 minutes with 100 RCU and 1000 WCU.
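For reference, a back-of-the-envelope check (assuming standard DynamoDB write pricing of 1 WCU per 1 KB written, rounded up) suggests those numbers point to a serial writer rather than a capacity problem; the figures below are taken from the report above:

```ts
// Rough throughput check for the numbers reported above.
// Assumes 1 WCU per 1 KB written (rounded up), DynamoDB's standard write cost.
const itemSizeKb = 11;                    // reported average item size
const wcuPerItem = Math.ceil(itemSizeKb); // 11 WCU per 11 KB item
const provisionedWcu = 1000;

// At full utilization: 1000 WCU / 11 WCU per item ≈ 90 items/s,
// i.e. ~3,000 items in roughly 33 seconds.
const maxItemsPerSec = provisionedWcu / wcuPerItem;

// Observed: ~3,000 items in 10 minutes ≈ 5 items/s, i.e. ~55 WCU,
// which matches the ~55 WCU average seen in the metrics.
const observedItemsPerSec = 3000 / 600;
const observedWcu = observedItemsPerSec * wcuPerItem;

console.log({ maxItemsPerSec, observedItemsPerSec, observedWcu });
```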
I believe the library needs to be optimized to write in parallel so it can make full use of the provisioned WCU; a sketch of that approach follows.
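A minimal sketch of what parallel writing could look like, assuming the AWS SDK for JavaScript v3; `parallelWrite`, its parameters, and the `concurrency` default are illustrative, not this library's actual API, and items are assumed to already be in DynamoDB AttributeValue format:

```ts
import {
  DynamoDBClient,
  BatchWriteItemCommand,
  WriteRequest,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

// Split items into BatchWriteItem-sized chunks (the API caps each call at 25).
function chunk<T>(items: T[], size = 25): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Keep up to `concurrency` batches in flight instead of one at a time.
// UnprocessedItems must be retried; shown minimally here (real code should
// also back off exponentially when throttled).
async function parallelWrite(
  table: string,
  items: Record<string, any>[], // items already in AttributeValue format
  concurrency = 10
): Promise<void> {
  const batches: WriteRequest[][] = chunk(
    items.map((Item) => ({ PutRequest: { Item } }))
  );
  for (let i = 0; i < batches.length; i += concurrency) {
    const window = batches.slice(i, i + concurrency);
    await Promise.all(
      window.map(async (batch) => {
        let pending: WriteRequest[] | undefined = batch;
        while (pending && pending.length > 0) {
          const res = await client.send(
            new BatchWriteItemCommand({ RequestItems: { [table]: pending } })
          );
          pending = res.UnprocessedItems?.[table]; // retry throttled writes
        }
      })
    );
  }
}
```

With 10 batches of 25 items in flight, throughput is bounded by provisioned WCU and network latency rather than by one round trip at a time.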
This tool is exactly what I need. I have close to 500K records to migrate, and it would be very helpful if this tool took the source and target tables' read/write capacities into account.
@asuresh26 Still trying to find time to update this repo.
@enGMzizo Thanks so much for this amazing tool. I migrated 500k records and was hoping to scale up to a few million if performance was good.
I set the minimum RCU and WCU to 1000 and it worked for me.
Hi,
First, I have to say this is a really helpful tool.
However, when I used it to copy my source table to my target table, CloudWatch showed only 25 WCU consumed per second (both my source and target tables have more than 150 RCU and more than 1000 WCU).
I assume the 25-item limit of the BatchWriteItem API is being hit. Is this expected?
Kind regards :')
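That observation is consistent with the BatchWriteItem limit: each call accepts at most 25 put/delete requests, so a copier that keeps only one batch in flight is capped at 25 items per round trip no matter how much WCU is provisioned. A minimal sketch of that serial pattern (illustrative, assuming the AWS SDK for JavaScript v3; not this library's actual code):

```ts
import {
  DynamoDBClient,
  BatchWriteItemCommand,
  WriteRequest,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

// Serial pattern: exactly one 25-item batch in flight at a time.
// With ~1 KB items and ~1 second per round trip (scan page + write),
// this consumes ~25 WCU/s regardless of how much capacity is provisioned.
async function serialCopy(table: string, batches: WriteRequest[][]) {
  for (const batch of batches) {
    await client.send(
      new BatchWriteItemCommand({ RequestItems: { [table]: batch } })
    );
    // the next batch only starts after the previous one completes
  }
}
```

Raising the provisioned WCU cannot speed this loop up; only keeping more batches in flight (as in the parallel sketch earlier in the thread) can.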