The OSM database is pretty hefty, which means that dumping it in as parallel a fashion as possible is desirable. Doing this, however, results in directory output (https://www.postgresql.org/docs/9.4/static/backup-dump.html), and right now it looks as though planet-dump-ng expects a single file, based on the documentation in planet-dump-ng --help and a little digging in the source.
How difficult would it be to deal with the multipart table images created in parallel dumps?
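For reference, getting a parallel dump out of pg_dump requires the directory archive format; a minimal sketch (database name, output path, and job count are illustrative):

```sh
# -Fd selects the directory archive format, the only archive format that
# supports parallel dumping; -j sets the number of parallel worker jobs.
pg_dump -Fd -j 8 -f /var/backups/osm-dump openstreetmap
```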
Actually, it looks like pg_restore is being used behind the scenes here, which means that directories are potentially OK as inputs. Perhaps this just indicates that some documentation additions would be helpful. Happy to provide the PRs if there's a desire for that.
Yup! You're absolutely right: the code just forks off a separate copy of pg_restore to read each table. Since dump_file is just a string passed through from the arguments, it should work with either a file or a directory dump, although I've only tested with files.
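Concretely, for each table it runs something equivalent to the following (a sketch; the exact flags in the source may differ, but -a/--data-only and -t/--table are standard pg_restore options, and pg_restore accepts either a single-file or a directory archive as its argument):

```sh
# Emit just the data section of one table from the archive on stdout;
# the archive path may be a custom-format file or a directory-format dump.
pg_restore -a -t nodes /var/backups/osm-dump
```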
Please try it and let me know how it goes! PRs to improve the docs or code would be very welcome.
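Assuming a directory dump like the one above, pointing planet-dump-ng at it would look something like this; the -f/--dump-file and -p output flags here are from memory of --help, so treat the exact spelling as illustrative:

```sh
# Read the directory-format dump and write a planet PBF from it.
planet-dump-ng -f /var/backups/osm-dump -p planet.osm.pbf
```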