I'm experimenting with compressing a somewhat larger SQLite database, but sqlite3 keeps running out of memory and getting killed by the OOM killer when I run:

```sql
SELECT zstd_incremental_maintenance(null, 1);
```

```
[2022-09-03T06:44:48Z INFO sqlite_zstd::transparent] tiles_data.tile_data: Total 6142360 rows (26.48GB) to potentially compress (split in 1 groups).
```
I have 32GB of RAM (4GB in use when starting the maintenance), and it still failed even after I added a 20GB swapfile:

```
Out of memory: Killed process 270983 (sqlite3) total-vm:53662032kB, anon-rss:25559800kB, file-rss:1172kB, shmem-rss:0kB, UID:1000 pgtables:84908kB oom_score_adj:0
```
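For reference, `zstd_incremental_maintenance` takes a time budget in seconds (null meaning run until everything is compressed) and a target database load fraction, so it can also be run in bounded slices. A sketch, with arbitrary example values rather than the ones from my run above:

```sql
-- Sketch: run maintenance in bounded slices instead of one unbounded
-- pass. The 60-second budget and 0.5 load fraction are arbitrary
-- example values, not tested recommendations.
SELECT zstd_incremental_maintenance(60, 0.5);
```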
I then used a 40GB swapfile, which seemed to get a bit further, but it finally fails with:

```
Runtime error: getting dict

Caused by:
    0: Training dictionary failed
       Caused by:
           Src size is incorrect
    1: Error code 1: SQL error or missing database
```
This seems to be the same issue as #11.
After updating `train_dict_samples_ratio`, the maintenance continues and uses a lot less memory.
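For anyone hitting the same problem: `train_dict_samples_ratio` is one of the keys in the JSON config passed to `zstd_enable_transparent`. A minimal sketch of setting it at enable time, assuming my table and column names from above; the 10.0 ratio and the other settings are placeholders to tune, not the exact values I used:

```sql
-- Sketch: lower train_dict_samples_ratio so dictionary training samples
-- fewer rows and needs less memory. The 10.0 value, compression level,
-- and chunk size below are placeholder assumptions.
SELECT zstd_enable_transparent('{
    "table": "tiles_data",
    "column": "tile_data",
    "compression_level": 19,
    "dict_chunk_size": "10MB",
    "train_dict_samples_ratio": 10.0
}');
```

If the column is already compression-enabled, the same key would instead need to be changed in the stored configuration rather than by re-running the enable call.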