Command:
/usr/local/esm/bin/esm -s http://10.20.4.148:9204 -x jw_account_twitter_www -d http://10.20.5.92:9204 -y jw_account_twitter_www -w 10 -b 20 -c 5000 --sliced_scroll_size=5 --buffer_count=500000 -t 480m --refresh
Log output:
[05-21 14:02:07] [INF] [main.go:474,main] start data migration..
Scroll 245000 / 489705040 [>--------------------------------------------------------------------------------------------] 0.05% 3s
Bulk 0 / 489705040 [--------------------------------------------------------------------------------------------] 0.00%
[05-21 14:02:10] [ERR] [scroll.go:112,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":23697591646,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 1965000 / 489705040 [>--------------------------------------------------------------------------------------------] 0.40% 41s
Bulk 1262571 / 489705040 [>--------------------------------------------------------------------------------------------] 0.26% 4h19m20s
[05-21 14:02:48] [ERR] [scroll.go:112,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":22085485606,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 1975000 / 489705040 [>--------------------------------------------------------------------------------------------] 0.40% 41s
Bulk 1262571 / 489705040 [>---------------------------------------------------------------------------------------------] 0.26% 4h19m20s
[05-21 14:02:48] [ERR] [scroll.go:112,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":24417518630,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 3505000 / 489705040 [=>--------------------------------------------------------------------------------------------] 0.72% 2m16s
Bulk 3504849 / 489705040 [=>---------------------------------------------------------------------------------------------] 0.72% 2m16s
[05-21 14:04:24] [INF] [main.go:505,main] data migration finished.
Note:
The index being migrated was originally migrated from version 6 to version 7, and its type is content. It now needs to be migrated back from 7.17.4 to 6.8.6, but the migration does not succeed. Other indices also stop at the same position every time, and the log then reports that the migration has finished.
For example:
Bulk 188633543 / 524941135 [===================>--------------------------------------------] 35.93% 2h41m28s
[05-21 13:16:21] [ERR] [scroll.go:110,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":29064105238,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 193625000 / 524941135 [===================>--------------------------------------------] 36.89% 1h33m27s
Bulk 193625000 / 524941135 [====================>--------------------------------------------] 36.89% 1h33m27s
Each run stops after migrating only 193625000 documents.
What is going on here?
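For reference, the "[parent] Data too large" entries above are Elasticsearch's parent circuit breaker rejecting the scroll requests: bytes_wanted (the estimated memory the request would need) exceeds bytes_limit (roughly 19.9 GB). One way to inspect the breaker state is the node stats API; a minimal sketch, reusing the source address from the command above:

curl 'http://10.20.4.148:9204/_nodes/stats/breaker?pretty'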
Maybe the scroll size is too large; you can set it smaller. You could also try ela: https://github.com/CharellKing/ela
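A hedged sketch of what "set it smaller" could look like, reusing the same endpoints, index names, and flags from the issue; the reduced values below are illustrative, not tuned:

/usr/local/esm/bin/esm -s http://10.20.4.148:9204 -x jw_account_twitter_www -d http://10.20.5.92:9204 -y jw_account_twitter_www -w 5 -b 5 -c 1000 --sliced_scroll_size=2 --buffer_count=100000 -t 480m --refresh

Lowering -c (documents per scroll), -b (bulk size), and --sliced_scroll_size reduces how much data each request asks the cluster to hold in memory at once, which is what the parent circuit breaker ("Data too large") is rejecting.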