
[FEATURE] Can pass1 be re-sorted for pass2? #982

Open
forgotPassword opened this issue Aug 17, 2024 · 4 comments


@forgotPassword

I am running on Windows with 32GB of RAM.

When generating tiles for, say, Canada (a 4GB extract), the generated node.db is 6GB and the OS keeps the memory-mapped file cached: during the ways step of pass2 I get 100-150k/s, with almost no HD activity.

But for the US (a 10GB extract), node.db is 16GB, and the OS can no longer keep it cached: the ways step of pass2 drops to 2-5k/s, i.e. 30-40x slower, with the HD at 100% usage.

Could node.db (or another copy of it) be pre-sorted for the ways step, so the reads are sequential and the random-read penalty is avoided?

Perhaps it also makes sense to update the docs to note that for good performance you need RAM of roughly 1x-1.6x the input size (enough to cache node.db).
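For what it's worth, one workaround sketch built from flags that appear in the argument dump below: keep the node location map on the JVM heap instead of a memory-mapped file. That the `ram` option helps on an HDD is my assumption, and it only works if the heap can hold the whole node map.

```shell
# Sketch, untested: store only the node location map on the JVM heap
# (--nodemap_storage is listed below as one of [ram, mmap, direct]) so
# pass2 node lookups never hit the HDD. The heap must fit the node map
# (~16GB for the US), so something like -Xmx20g on a 32GB machine.
# "..." stands for the rest of the normal command line.
java -Xmx20g -jar planetiler_0.8.2.jar --nodemap_storage=ram ...
```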

Thanks!

@msbarry
Contributor

msbarry commented Aug 17, 2024

Can you share the exact command you are running and the logs? I don't have a Windows machine to test on, but on Linux pass2 averages 27k ways per second even with 8GB of RAM (https://github.com/onthegomap/planetiler/blob/main/planet-logs/v0.7.0-planet-im4gn-8gb-no-z13-building-merge.txt), so it seems like it should be faster.

I'm trying to understand how the sort would work. It seems like at some point it would still need to do the random-access lookups to join the node locations; we'd just push that lookup into the sort stage. Curious to see the command and logs though, there might be something else going on.
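To make the random-access concern concrete, here is a toy model of the pass2 join (this is not Planetiler's actual code; `nodeDb`/`joinWay` are made-up names): node locations sit in one large buffer indexed by node ID, and each way looks up whatever IDs it references, in way order rather than ID order. On an HDD each out-of-cache lookup is a seek, and sorting a copy of the ways by node ID would move those same scattered reads into the sort itself.

```java
import java.nio.ByteBuffer;
import java.nio.LongBuffer;
import java.util.Arrays;

// Toy model of the pass2 way-assembly join (not Planetiler's actual code).
public class NodeJoin {

  // Stand-in for the memory-mapped node.db: packed locations indexed by node ID.
  static LongBuffer nodeDb(long[] locations) {
    LongBuffer db = ByteBuffer.allocateDirect(locations.length * Long.BYTES).asLongBuffer();
    db.put(locations).flip();
    return db;
  }

  // Resolve a way's node IDs to locations: one random 8-byte read per node,
  // at offsets determined by the way, not by storage order.
  static long[] joinWay(LongBuffer db, int[] wayNodeIds) {
    long[] out = new long[wayNodeIds.length];
    for (int i = 0; i < wayNodeIds.length; i++) {
      out[i] = db.get(wayNodeIds[i]);
    }
    return out;
  }

  public static void main(String[] args) {
    LongBuffer db = nodeDb(new long[]{100, 200, 300, 400, 500});
    // This way references nodes 4, 0, 2: three reads scattered across the buffer.
    System.out.println(Arrays.toString(joinWay(db, new int[]{4, 0, 2}))); // prints [500, 100, 300]
  }
}
```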

@forgotPassword
Author

Would be great if you could reproduce the issue. Windows 10 with an HDD (not an SSD; maybe that's the secret ingredient).

canada:

java -Xmx6g -jar planetiler_0.8.2.jar --force --keep_unzipped=true --natural_earth_keep_unzipped=true --maxzoom=14 --osm_path=..\..\datasources\regions\canada-latest.osm.pbf --output=ca.pmtiles --only_layers=landuse,water,transportation,landcover,place,boundary,poi,transportation_name,building,housenumber,waterway
0:00:00 DEB - argument: config=null (path to config file)
0:00:00 DEB - argument: download_dir=data\sources (download directory)
0:00:00 DEB - argument: area=monaco (name of the extract to download if osm_url/osm_path not specified (i.e. 'monaco' 'rhode island' 'australia' or 'planet'))
0:00:00 INF - argument: stats=use in-memory stats
0:00:00 DEB - argument: madvise=true (default value for whether to use linux madvise(random) to improve memory-mapped read performance for temporary storage)
0:00:00 DEB - argument: storage=mmap (default storage type for temporary data, one of [ram, mmap, direct])
0:00:00 DEB - argument: threads=12 (num threads)
0:00:00 DEB - argument: write_threads=1 (number of threads to use when writing temp features)
0:00:00 DEB - argument: process_threads=11 (number of threads to use when processing input features)
0:00:00 DEB - argument: bounds=null (bounds)
0:00:00 DEB - argument: polygon=null (a .poly file that limits output to tiles intersecting the shape)
0:00:00 DEB - argument: minzoom=0 (minimum zoom level)
0:00:00 DEB - argument: maxzoom=14 (maximum zoom level up to 15)
0:00:00 DEB - argument: render_maxzoom=14 (maximum rendering zoom level up to 15)
0:00:00 DEB - argument: tmpdir=data\tmp (temp directory)
0:00:00 DEB - argument: feature_read_threads=1 (number of threads to use when reading features at tile write time)
0:00:00 DEB - argument: tile_write_threads=1 (number of threads used to write tiles - only supported by [files, csv, tsv, proto, pbf, json])
0:00:00 DEB - argument: loginterval=10 seconds (time between logs)
0:00:00 DEB - argument: force=true (overwriting output file and ignore disk/RAM warnings)
0:00:00 DEB - argument: append=false (append to the output file - only supported by [files, csv, tsv, proto, pbf, json])
0:00:00 DEB - argument: compress_temp=false (compress temporary feature storage (uses more CPU, but less disk space))
0:00:00 DEB - argument: mmap_temp=true (use memory-mapped IO for temp feature files)
0:00:00 DEB - argument: sort_max_readers=6 (maximum number of concurrent read threads to use when sorting chunks)
0:00:00 DEB - argument: sort_max_writers=6 (maximum number of concurrent write threads to use when sorting chunks)
0:00:00 DEB - argument: nodemap_type=sparsearray (type of node location map, one of [noop, sortedtable, sparsearray, array])
0:00:00 DEB - argument: nodemap_storage=mmap (storage for node location map, one of [ram, mmap, direct])
0:00:00 DEB - argument: nodemap_madvise=true (use linux madvise(random) for node locations)
0:00:00 DEB - argument: multipolygon_geometry_storage=mmap (storage for multipolygon geometries, one of [ram, mmap, direct])
0:00:00 DEB - argument: multipolygon_geometry_madvise=true (use linux madvise(random) for temporary multipolygon geometry storage)
0:00:00 DEB - argument: http_user_agent=Planetiler downloader (https://github.com/onthegomap/planetiler) (User-Agent header to set when downloading files over HTTP)
0:00:00 DEB - argument: http_timeout=30 seconds (Timeout to use when downloading files over HTTP)
0:00:00 DEB - argument: http_retries=1 (Retries to use when downloading files over HTTP)
0:00:00 DEB - argument: http_retry_wait=5 seconds (How long to wait before retrying HTTP request)
0:00:00 DEB - argument: download_chunk_size_mb=100 (Size of file chunks to download in parallel in megabytes)
0:00:00 DEB - argument: download_threads=1 (Number of parallel threads to use when downloading each file)
0:00:00 DEB - argument: download_max_bandwidth= (Maximum bandwidth to consume when downloading files in units mb/s, mbps, kbps, etc.)
0:00:00 DEB - argument: min_feature_size_at_max_zoom=0.0625 (Default value for the minimum size in tile pixels of features to emit at the maximum zoom level to allow for overzooming)
0:00:00 DEB - argument: min_feature_size=1.0 (Default value for the minimum size in tile pixels of features to emit below the maximum zoom level)
0:00:00 DEB - argument: simplify_tolerance_at_max_zoom=0.0625 (Default value for the tile pixel tolerance to use when simplifying features at the maximum zoom level to allow for overzooming)
0:00:00 DEB - argument: simplify_tolerance=0.1 (Default value for the tile pixel tolerance to use when simplifying features below the maximum zoom level)
0:00:00 DEB - argument: osm_lazy_reads=true (Read OSM blocks from disk in worker threads)
0:00:00 DEB - argument: skip_filled_tiles=false (Skip writing tiles containing only polygon fills to the output)
0:00:00 DEB - argument: tile_warning_size_mb=1.0 (Maximum size in megabytes of a tile to emit a warning about)
0:00:00 DEB - argument: color=null (Color the terminal output)
0:00:00 DEB - argument: keep_unzipped=true (keep unzipped sources by default after reading)
0:00:00 DEB - argument: tile_compression=gzip (the tile compression, one of [none, gzip])
0:00:00 DEB - argument: output_layerstats=false (output a tsv.gz file for each tile/layer size)
0:00:00 DEB - argument: debug_url=https://onthegomap.github.io/planetiler-demo/#{z}/{lat}/{lon} (debug url to use for displaying tiles with {z} {lat} {lon} placeholders)
0:00:00 DEB - argument: tile_weights=data\tile_weights.tsv.gz (tsv.gz file with columns z,x,y,loads to generate weighted average tile size stat)
0:00:00 DEB - argument: max_point_buffer=Infinity (Max tile pixels to include points outside tile bounds. Set to a lower value to reduce tile size for clients that handle label collisions across tiles (most web and native clients). NOTE: Do not reduce if you need to support raster tile rendering)
0:00:00 DEB - argument: log_jts_exceptions=false (Emit verbose details to debug JTS geometry errors)
0:00:00 DEB - argument: feature_source_id_multiplier=10 (Set vector tile feature IDs to (featureId * thisValue) + sourceId where sourceId is 1 for OSM nodes, 2 for ways, 3 for relations, and 0 for other sources. Set to false to disable.)
0:00:00 DEB - argument: only_download=false (download source data then exit)
0:00:00 DEB - argument: tests=null (run test cases in a yaml then quit)
0:00:00 DEB - argument: download=false (download sources)
0:00:00 DEB - argument: refresh_sources=false (download new version of source files if they have changed)
0:00:00 DEB - argument: download_osm_tile_weights=false (download OSM tile weights file)
0:00:00 DEB - argument: temp_nodes=data\tmp\node.db (temp node db location)
0:00:00 DEB - argument: temp_multipolygons=data\tmp\multipolygon.db (temp multipolygon db location)
0:00:00 DEB - argument: temp_features=data\tmp\feature.db (temp feature db location)
0:00:00 DEB - argument: osm_parse_node_bounds=false (parse bounds from OSM nodes instead of header)
0:00:00 DEB - argument: only_fetch_wikidata=false (fetch wikidata translations then quit)
0:00:00 DEB - argument: fetch_wikidata=false (fetch wikidata translations then continue)
0:00:00 DEB - argument: use_wikidata=true (use wikidata translations)
0:00:00 DEB - argument: wikidata_cache=data\sources\wikidata_names.json (wikidata cache file)
0:00:00 DEB - argument: lake_centerlines_path=data\sources\lake_centerline.shp.zip (lake_centerlines shapefile path)
0:00:00 DEB - argument: refresh_lake_centerlines=false (Download new version of lake_centerlines if changed)
0:00:00 DEB - argument: free_lake_centerlines_after_read=false (delete lake_centerlines input file after reading to make space for output (reduces peak disk usage))
0:00:00 DEB - argument: water_polygons_path=data\sources\water-polygons-split-3857.zip (water_polygons shapefile path)
0:00:00 DEB - argument: refresh_water_polygons=false (Download new version of water_polygons if changed)
0:00:00 DEB - argument: free_water_polygons_after_read=false (delete water_polygons input file after reading to make space for output (reduces peak disk usage))
0:00:00 DEB - argument: natural_earth_path=data\sources\natural_earth_vector.sqlite.zip (natural_earth sqlite db path)
0:00:00 DEB - argument: refresh_natural_earth=false (Download new version of natural_earth if changed)
0:00:00 DEB - argument: free_natural_earth_after_read=false (delete natural_earth input file after reading to make space for output (reduces peak disk usage))
0:00:00 DEB - argument: natural_earth_keep_unzipped=true (keep unzipped natural_earth after reading)
0:00:00 DEB - argument: osm_path=..\..\datasources\regions\canada-latest.osm.pbf (osm OSM input file path)
0:00:00 DEB - argument: refresh_osm=false (Download new version of osm if changed)
0:00:00 DEB - argument: free_osm_after_read=false (delete osm input file after reading to make space for output (reduces peak disk usage))
0:00:00 INF - madvise not available on this system
0:00:00 DEB - argument: output=ca.pmtiles (output tile archive path)
0:00:00 DEB - argument: version=false (show version then exit)
0:00:00 INF - Planetiler build git hash: a74f274c3162cd547c9b19a9a5e8705982a61755
0:00:00 INF - Planetiler build version: 0.8.2
0:00:00 INF - Planetiler build timestamp: 2024-07-02T11:38:28.480Z
0:00:00 DEB - argument: transliterate=true (attempt to transliterate latin names)
0:00:00 DEB - argument: languages=am,ar,az,be,bg,bn,br,bs,ca,co,cs,cy,da,de,el,en,eo,es,et,eu,fa,fi,fr,fy,ga,gd,he,hi,hr,hu,hy,id,is,it,ja,ja_kana,ja_rm,ja-Latn,ja-Hira,ka,kk,kn,ko,ko-Latn,ku,la,lb,lt,lv,mk,mt,ml,nl,no,oc,pa,pnb,pl,pt,rm,ro,ru,sk,sl,sq,sr,sr-Latn,sv,ta,te,th,tr,uk,ur,vi,zh,zh-Hant,zh-Hans (languages to use)
0:00:00 DEB - argument: only_layers=landuse,water,transportation,landcover,place,boundary,poi,transportation_name,building,housenumber,waterway (Include only certain layers)
0:00:00 DEB - argument: exclude_layers= (Exclude certain layers)
0:00:00 DEB - argument: boundary_country_names=true (boundary layer: add left/right codes of neighboring countries)
0:00:00 DEB - argument: boundary_osm_only=false (boundary layer: only use OSM, even at low zoom levels)
0:00:00 DEB - argument: transportation_z13_paths=false (transportation(_name) layer: show all paths on z13)
0:00:00 DEB - argument: building_merge_z13=true (building layer: merge nearby buildings at z13)
0:00:00 DEB - argument: transportation_name_brunnel=false (transportation_name layer: set to false to omit brunnel and help merge long highways)
0:00:00 DEB - argument: transportation_name_size_for_shield=false (transportation_name layer: allow road names on shorter segments (ie. they will have a shield))
0:00:00 DEB - argument: transportation_name_limit_merge=false (transportation_name layer: limit merge so we don't combine different relations to help merge long highways)
0:00:00 DEB - argument: transportation_name_minor_refs=false (transportation_name layer: include name and refs from minor road networks if not present on a way)
0:00:00 DEB - argument: help=false (show arguments then exit)
0:00:00 DEB - argument: layer_stats=G:\project\_bin\planetiler\ca.pmtiles.layerstats.tsv.gz (layer stats output path)
0:00:00 INF - Building OpenMapTilesProfile profile into file:///G:/project/_bin/planetiler/ca.pmtiles in these phases:
0:00:00 INF -   lake_centerlines: Process features in data\sources\lake_centerline.shp.zip
0:00:00 INF -   water_polygons: Process features in data\sources\water-polygons-split-3857.zip
0:00:00 INF -   natural_earth: Process features in data\sources\natural_earth_vector.sqlite.zip
0:00:00 INF -   osm_pass1: Pre-process OpenStreetMap input (store node locations then relation members)
0:00:00 INF -   osm_pass2: Process OpenStreetMap nodes, ways, then relations
0:00:00 INF -   sort: Sort rendered features by tile ID
0:00:00 INF -   archive: Encode each tile and write to TileArchiveConfig[format=PMTILES, scheme=FILE, uri=file:///G:/project/_bin/planetiler/ca.pmtiles, options={}]
0:00:00 INF - no wikidata translations found, run with --fetch-wikidata to download
0:00:00 DEB - ✓ 19G storage on (G:) requested for read phase disk, 50G available
0:00:00 DEB -  - 4.4G used for temporary node location cache
0:00:00 DEB -  - 678M used for temporary multipolygon geometry cache
0:00:00 DEB -  - 14G used for temporary feature storage
0:00:00 DEB - ✓ 22G storage on (G:) requested for write phase disk, 50G available
0:00:00 DEB -  - 14G used for temporary feature storage
0:00:00 DEB -  - 7.3G used for archive output
0:00:00 DEB - ✓ 5.1G storage on (G:) requested for read phase, 50G available
0:00:00 DEB -  - 4.4G used for sparsearray node location cache
0:00:00 DEB -  - 678M used for multipolygon way geometries
0:00:00 DEB - ✓ 1.6G JVM heap requested for read phase, 6.4G available
0:00:00 DEB -  - 300M used for sparsearray node location in-memory index
0:00:00 DEB -  - 1.3G used for temporary profile storage
0:00:00 DEB - ✓ 5.1G temporary files and 27G of free memory for OS to cache them
0:00:00 INF - Setting map bounds from input: Env[-141.7761 : -44.17684, 41.660089 : 85.04032]
0:00:00 DEB - argument: archive_name=OpenMapTiles ('name' attribute for tileset metadata)
0:00:00 DEB - argument: archive_description=A tileset showcasing all layers in OpenMapTiles. https://openmaptiles.org ('description' attribute for tileset metadata)
0:00:00 DEB - argument: archive_attribution=<a href="https://www.openmaptiles.org/" target="_blank">&copy; OpenMapTiles</a> <a href="https://www.openstreetmap.org/copyright" target="_blank">&copy; OpenStreetMap contributors</a> ('attribution' attribute for tileset metadata)
0:00:00 DEB - argument: archive_version=3.15.0 ('version' attribute for tileset metadata)
0:00:00 DEB - argument: archive_type=baselayer ('type' attribute for tileset metadata)
0:00:00 DEB - argument: archive_format=pbf ('format' attribute for tileset metadata)
0:00:00 INF - Using merge sort feature map, chunk size=2000mb max workers=12
0:00:00 INF [lake_centerlines] - Skipping since profile does not use it
0:00:00 INF [water_polygons] -
0:00:00 INF [water_polygons] - Starting...
0:00:08 INF [water_polygons:write] - madvise not available on this system to speed up temporary feature IO.
0:00:18 INF [water_polygons] -  read: [  404  16%   40/s ] write: [ 1.4M 145k/s ] 2G
    cpus: 11.6 gc:  5% heap: 1.4G/6.4G direct: 54M postGC: 856M
    ->     (0/3) -> read(74%) ->   (1k/1k) -> process(80% 77% 63% 64% 73% 72% 80% 79% 72% 64% 71%) -> (5.8k/62k) -> write( 3%)
0:00:28 INF [water_polygons] -  read: [  979  40%   57/s ] write: [ 3.4M 202k/s ] 2G
    cpus: 11.4 gc:  5% heap: 1.4G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read(20%) ->   (1k/1k) -> process(94% 94% 94% 94% 94% 93% 93% 94% 94% 93% 93%) -> (4.1k/62k) -> write( 3%)
0:00:38 INF [water_polygons] -  read: [ 1.7k  71%   76/s ] write: [ 6.6M 310k/s ] 2G
    cpus: 11 gc:  4% heap: 1.4G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read( -%) ->  (726/1k) -> process(95% 96% 95% 95% 96% 96% 96% 96% 96% 95% 95%) -> (4.1k/62k) -> write( 5%)
0:00:41 INF [water_polygons] -  read: [ 2.4k 100%  265/s ] write: [  21M 5.4M/s ] 1G
    cpus: 10.2 gc:  2% heap: 2.4G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read( -%) ->    (0/1k) -> process( -%  -%  -%  -%  -%  -%  -%  -%  -%  -%  -%) ->   (0/62k) -> write( -%)
0:00:41 INF [water_polygons] - Finished in 41s cpu:6m22s gc:2s avg:9.4
0:00:41 INF [water_polygons] -   read     1x(23% 9s wait:14s done:15s)
0:00:41 INF [water_polygons] -   process 11x(70% 28s done:8s)
0:00:41 INF [water_polygons] -   write    1x(8% 3s wait:29s done:8s)
0:00:41 INF [natural_earth] -
0:00:41 INF [natural_earth] - Starting...
0:00:50 INF [natural_earth] -  read: [ 349k 100%  43k/s ] write: [ 2.8k  350/s ] 1G
    cpus: 1.9 gc:  0% heap: 2.1G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read( -%) ->    (0/1k) -> process( -%  -%  -%  -%  -%  -%  -%  -%  -%  -%  -%) ->   (0/62k) -> write( -%)
0:00:50 INF [natural_earth] - Finished in 9s cpu:16s avg:1.8
0:00:50 INF [natural_earth] -   read     1x(91% 8s)
0:00:50 INF [natural_earth] -   process 11x(3% 0.2s wait:8s)
0:00:50 INF [natural_earth] -   write    1x(0% 0s wait:8s)
0:00:50 INF [osm_pass1] -
0:00:50 INF [osm_pass1] - Starting...
0:01:00 INF [osm_pass1] -  nodes: [  81M 8.1M/s ] 1.6G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  10k   1k/s ]
    cpus: 3.8 gc:  3% heap: 480M/6.4G direct: 54M postGC: 255M hppc:  752
    read( 4%) ->    (0/31) -> parse(24% 22% 25% 22% 24% 26% 25% 23% 23% 24%) ->   (21/22) -> process(96%)
0:01:10 INF [osm_pass1] -  nodes: [ 180M 9.8M/s ] 3.2G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  22k 1.2k/s ]
    cpus: 3.6 gc:  3% heap: 398M/6.4G direct: 54M postGC: 313M hppc:  752
    read( 4%) ->    (0/31) -> parse(23% 25% 28% 28% 29% 24% 25% 26% 25% 23%) ->   (21/22) -> process(96%)
0:01:20 INF [osm_pass1] -  nodes: [ 293M  11M/s ] 4.6G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  36k 1.3k/s ]
    cpus: 4.1 gc:  3% heap: 1.3G/6.4G direct: 54M postGC: 442M hppc:  752
    read( 4%) ->    (0/31) -> parse(28% 31% 29% 31% 28% 29% 29% 30% 30% 26%) ->   (21/22) -> process(95%)
0:01:30 INF [osm_pass1] -  nodes: [ 422M  12M/s ] 5.9G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  52k 1.5k/s ]
    cpus: 4.6 gc:  4% heap: 982M/6.4G direct: 54M postGC: 713M hppc:  752
    read( 4%) ->    (0/31) -> parse(30% 35% 33% 36% 36% 33% 34% 32% 32% 33%) ->   (21/22) -> process(94%)
0:01:40 INF [osm_pass1:process] - Finished nodes: 549,684,884 (10M/s) in 50s cpu:3m33s gc:1s avg:4.2
0:01:40 INF [osm_pass1] -  nodes: [ 549M  12M/s ] 7G    ways: [  77k 7.6k/s ] rels: [    0    0/s ] blocks: [  68k 1.5k/s ]
    cpus: 5.1 gc:  2% heap: 1.4G/6.4G direct: 54M postGC: 744M hppc:  752
    read( 5%) ->    (1/31) -> parse(36% 40% 37% 38% 36% 39% 36% 40% 39% 34%) ->   (21/22) -> process(93%)
0:01:48 INF [osm_pass1:process] - Finished ways: 31,787,492 (4.1M/s) in 8s cpu:1m20s avg:10.4
0:01:50 INF [osm_pass1:process] - Finished relations: 535,124 (346k/s) in 2s cpu:5s avg:3.5
0:01:50 INF [osm_pass1] -  nodes: [ 549M    0/s ] 7G    ways: [  31M 3.4M/s ] rels: [ 535k  58k/s ] blocks: [  72k  439/s ]
    cpus: 9.2 gc:  5% heap: 1.2G/6.4G direct: 54M postGC: 932M hppc:  20M
    read( -%) ->    (0/31) -> parse( -%  -%  -%  -%  -%  -%  -%  -%  -%  -%) ->    (0/22) -> process( -%)
0:01:50 DEB [osm_pass1] - Processed 72,752 blocks:
0:01:50 DEB [osm_pass1] -   nodes: 549,684,884 (10M/s) in 50s cpu:3m33s gc:1s avg:4.2
0:01:50 DEB [osm_pass1] -   ways: 31,787,492 (4.1M/s) in 8s cpu:1m20s avg:10.4
0:01:50 DEB [osm_pass1] -   relations: 535,124 (346k/s) in 2s cpu:5s avg:3.5
0:01:50 INF [osm_pass1] - Finished in 1m cpu:5m gc:2s avg:5
0:01:50 INF [osm_pass1] -   read     1x(4% 2s wait:57s)
0:01:50 INF [osm_pass1] -   parse   10x(37% 22s wait:37s)
0:01:50 INF [osm_pass1] -   process  1x(92% 55s wait:2s)
0:01:50 INF [osm_pass2] -
0:01:50 INF [osm_pass2] - Starting...
0:02:00 INF [osm_pass2] -  nodes: [  64M  12% 6.4M/s ] 7G    ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  23M 194k/s ] 3G    blocks: [   8k  11%  800/s ]
    cpus: 11.1 gc:  2% heap: 1.7G/6.4G direct: 54M postGC: 786M relInfo:  20M mpGeoms: 277
    read(13%) ->    (9/22) -> process(93% 91% 93% 95% 94% 94% 93% 90% 91% 93% 95%) -> (6.4k/62k) -> write( 6%)
0:02:10 INF [osm_pass2] -  nodes: [ 137M  25% 7.2M/s ] 7G    ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  24M  82k/s ] 3G    blocks: [  17k  24%  907/s ]
    cpus: 11.1 gc:  1% heap: 2G/6.4G direct: 54M postGC: 786M relInfo:  20M mpGeoms: 277
    read(16%) ->   (11/22) -> process(99% 98% 98% 98% 98% 98% 98% 97% 97% 98% 98%) -> (4.7k/62k) -> write( 2%)
0:02:20 INF [osm_pass2] -  nodes: [ 207M  38% 6.9M/s ] 7G    ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  24M  77k/s ] 3G    blocks: [  25k  36%  871/s ]
    cpus: 11.2 gc:  2% heap: 1.6G/6.4G direct: 54M postGC: 818M relInfo:  20M mpGeoms: 277
    read(14%) ->    (8/22) -> process(97% 94% 95% 96% 96% 96% 97% 95% 94% 97% 97%) -> (7.6k/62k) -> write( 2%)
0:02:30 INF [osm_pass2] -  nodes: [ 278M  51% 7.1M/s ] 7G    ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  25M  56k/s ] 3G    blocks: [  34k  48%  889/s ]
    cpus: 11.2 gc:  2% heap: 844M/6.4G direct: 54M postGC: 843M relInfo:  20M mpGeoms: 277
    read(16%) ->   (11/22) -> process(96% 95% 96% 97% 98% 98% 98% 97% 96% 97% 98%) -> (4.8k/62k) -> write( 2%)
0:02:40 INF [osm_pass2] -  nodes: [ 346M  63% 6.7M/s ] 7G    ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  26M  73k/s ] 3G    blocks: [  43k  59%  842/s ]
    cpus: 11.3 gc:  2% heap: 1.5G/6.4G direct: 54M postGC: 881M relInfo:  20M mpGeoms: 277
    read(14%) ->    (9/22) -> process(94% 94% 93% 94% 96% 97% 98% 92% 91% 93% 95%) -> (5.3k/62k) -> write( 2%)
0:02:50 INF [osm_pass2] -  nodes: [ 417M  76% 7.1M/s ] 7G    ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  26M  43k/s ] 3G    blocks: [  52k  72%  891/s ]
    cpus: 11.2 gc:  1% heap: 1.2G/6.4G direct: 54M postGC: 884M relInfo:  20M mpGeoms: 277
    read(17%) ->    (8/22) -> process(96% 97% 97% 96% 97% 96% 97% 96% 96% 96% 97%) ->  (5k/62k) -> write( 1%)
0:03:00 INF [osm_pass2] -  nodes: [ 492M  90% 7.4M/s ] 7G    ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  26M 8.2k/s ] 3G    blocks: [  61k  85%  930/s ]
    cpus: 11 gc:  1% heap: 900M/6.4G direct: 54M postGC: 885M relInfo:  20M mpGeoms: 277
    read(16%) ->   (11/22) -> process(98% 98% 98% 98% 99% 98% 98% 98% 98% 98% 99%) -> (4.3k/62k) -> write( 0%)
0:03:08 DEB [osm_pass2:process] - Sorting long long multimap...
0:03:08 INF [osm_pass2:process] - Finished nodes: 549,684,884 (7M/s) in 1m18s cpu:14m32s gc:1s avg:11.2
0:03:08 DEB [osm_pass2:process] - Sorted long long multimap 0.3s cpu:3s avg:10
0:03:08 INF [osm_pass2:process] - madvise not available on this system - node location lookup may be slower when less free RAM is available outside the JVM
0:03:09 WAR [osm_pass2:process] - No GB polygon for inferring route network types
0:03:10 INF [osm_pass2] -  nodes: [ 549M 100% 5.6M/s ] 7G    ways: [ 105k   0%  10k/s ] rels: [    0   0%    0/s ] features: [  28M 129k/s ] 3G    blocks: [  68k  94%  711/s ]
    cpus: 11.2 gc:  1% heap: 1.9G/6.4G direct: 54M postGC: 954M relInfo:  17M mpGeoms: 15M
    read(12%) ->   (11/22) -> process(93% 91% 91% 91% 91% 90% 91% 91% 91% 92% 92%) -> (7.1k/62k) -> write( 4%)
0:03:12 WAR [osm_pass2:process] - No IE polygon for inferring route network types
0:03:20 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 1.1M   4%  99k/s ] rels: [    0   0%    0/s ] features: [  34M 609k/s ] 3G    blocks: [  68k  95%   12/s ]
    cpus: 11.4 gc:  4% heap: 1.8G/6.4G direct: 54M postGC: 971M relInfo:  17M mpGeoms: 44M
    read( 0%) ->   (11/22) -> process(91% 90% 88% 91% 90% 90% 91% 89% 87% 92% 88%) ->  (7k/62k) -> write(20%)
0:03:30 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 2.1M   7% 103k/s ] rels: [    0   0%    0/s ] features: [  36M 257k/s ] 4.3G  blocks: [  68k  95%   12/s ]
    cpus: 10.8 gc:  4% heap: 868M/6.4G direct: 54M postGC: 974M relInfo:  17M mpGeoms: 101M
    read( 0%) ->   (11/22) -> process(93% 95% 95% 95% 95% 95% 95% 93% 93% 95% 94%) ->  (5k/62k) -> write(10%)
0:03:40 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 3.2M  10% 107k/s ] rels: [    0   0%    0/s ] features: [  39M 248k/s ] 4.3G  blocks: [  69k  95%   13/s ]
    cpus: 10.6 gc:  4% heap: 843M/6.4G direct: 54M postGC: 911M relInfo:  17M mpGeoms: 164M
    read( 0%) ->   (11/22) -> process(93% 92% 92% 91% 91% 91% 92% 91% 92% 92% 91%) -> (4.2k/62k) -> write( 9%)
0:03:50 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [   4M  13%  84k/s ] rels: [    0   0%    0/s ] features: [  41M 245k/s ] 4.3G  blocks: [  69k  95%   10/s ]
    cpus: 10.8 gc:  4% heap: 2.2G/6.4G direct: 54M postGC: 910M relInfo:  17M mpGeoms: 230M
    read( 0%) ->   (11/22) -> process(95% 93% 94% 96% 95% 94% 94% 94% 92% 95% 96%) -> (6.1k/62k) -> write(10%)
0:04:00 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 4.9M  16%  84k/s ] rels: [    0   0%    0/s ] features: [  44M 247k/s ] 4.3G  blocks: [  69k  95%   10/s ]
    cpus: 10.7 gc:  4% heap: 1.5G/6.4G direct: 54M postGC: 905M relInfo:  17M mpGeoms: 280M
    read( 0%) ->   (11/22) -> process(95% 93% 96% 95% 95% 95% 95% 94% 93% 94% 96%) -> (6.2k/62k) -> write( 8%)
0:04:10 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 5.9M  19%  97k/s ] rels: [    0   0%    0/s ] features: [  47M 292k/s ] 4.3G  blocks: [  69k  95%   12/s ]
    cpus: 10.8 gc:  4% heap: 1.1G/6.4G direct: 54M postGC: 924M relInfo:  17M mpGeoms: 337M
    read( 0%) ->   (11/22) -> process(96% 95% 95% 96% 94% 95% 95% 95% 96% 96% 95%) -> (7.1k/62k) -> write( 9%)
0:04:20 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 7.1M  23% 125k/s ] rels: [    0   0%    0/s ] features: [  50M 367k/s ] 5.5G  blocks: [  69k  96%   15/s ]
    cpus: 10.8 gc:  4% heap: 1.4G/6.4G direct: 54M postGC: 922M relInfo:  17M mpGeoms: 389M
    read( 0%) ->   (11/22) -> process(94% 94% 94% 94% 94% 94% 94% 93% 94% 93% 94%) -> (5.6k/62k) -> write(14%)
0:04:30 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 7.7M  24%  59k/s ] rels: [    0   0%    0/s ] features: [  52M 177k/s ] 5.5G  blocks: [  69k  96%    7/s ]
    cpus: 10.8 gc:  3% heap: 1.5G/6.4G direct: 54M postGC: 925M relInfo:  17M mpGeoms: 422M
    read( 0%) ->   (11/22) -> process(97% 96% 96% 96% 96% 95% 96% 96% 96% 96% 96%) -> (7.1k/62k) -> write( 4%)
0:04:40 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 8.1M  26%  41k/s ] rels: [    0   0%    0/s ] features: [  53M 118k/s ] 5.5G  blocks: [  69k  96%    5/s ]
    cpus: 10.8 gc:  2% heap: 1.2G/6.4G direct: 54M postGC: 955M relInfo:  17M mpGeoms: 442M
    read( 0%) ->   (11/22) -> process(97% 96% 97% 97% 97% 97% 95% 97% 96% 96% 97%) -> (5.7k/62k) -> write( 5%)
0:04:50 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 8.7M  27%  52k/s ] rels: [    0   0%    0/s ] features: [  55M 166k/s ] 5.5G  blocks: [  69k  96%    6/s ]
    cpus: 11 gc:  3% heap: 1.6G/6.4G direct: 54M postGC: 999M relInfo:  17M mpGeoms: 459M
    read( 0%) ->   (11/22) -> process(98% 97% 98% 98% 98% 98% 98% 98% 97% 98% 98%) -> (6.4k/62k) -> write( 4%)
0:05:00 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [ 9.7M  31% 105k/s ] rels: [    0   0%    0/s ] features: [  58M 273k/s ] 5.5G  blocks: [  69k  96%   13/s ]
    cpus: 10.8 gc:  4% heap: 1G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 526M
    read( 0%) ->   (11/22) -> process(94% 94% 93% 94% 93% 94% 94% 93% 94% 92% 93%) -> (6.1k/62k) -> write(10%)
0:05:11 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  10M  34%  87k/s ] rels: [    0   0%    0/s ] features: [  61M 274k/s ] 5.5G  blocks: [  70k  96%   11/s ]
    cpus: 10.6 gc:  4% heap: 1.4G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 609M
    read( 0%) ->   (11/22) -> process(93% 94% 95% 95% 93% 94% 95% 93% 93% 95% 92%) -> (5.1k/62k) -> write( 8%)
0:05:21 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  11M  36%  81k/s ] rels: [    0   0%    0/s ] features: [  63M 225k/s ] 5.5G  blocks: [  70k  96%   10/s ]
    cpus: 10.4 gc:  4% heap: 2G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 721M
    read( 0%) ->   (11/22) -> process(91% 92% 93% 92% 94% 91% 93% 92% 91% 92% 92%) -> (4.8k/62k) -> write( 8%)
0:05:31 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  12M  39%  96k/s ] rels: [    0   0%    0/s ] features: [  65M 258k/s ] 6.8G  blocks: [  70k  97%   11/s ]
    cpus: 10.7 gc:  4% heap: 1.7G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 804M
    read( 0%) ->   (11/22) -> process(95% 94% 94% 95% 94% 94% 93% 93% 95% 94% 95%) -> (4.8k/62k) -> write( 9%)
0:05:41 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  13M  43% 107k/s ] rels: [    0   0%    0/s ] features: [  68M 310k/s ] 6.8G  blocks: [  70k  97%   13/s ]
    cpus: 10.7 gc:  4% heap: 1G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 854M
    read( 0%) ->   (11/22) -> process(94% 94% 94% 94% 93% 93% 95% 94% 94% 94% 94%) -> (5.5k/62k) -> write(10%)
0:05:51 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  14M  45%  72k/s ] rels: [    0   0%    0/s ] features: [  71M 241k/s ] 6.8G  blocks: [  70k  97%    8/s ]
    cpus: 10.8 gc:  4% heap: 1.1G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 900M
    read( 0%) ->   (11/22) -> process(95% 94% 95% 93% 95% 93% 94% 95% 95% 94% 95%) -> (5.2k/62k) -> write( 9%)
0:06:01 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  15M  49% 127k/s ] rels: [    0   0%    0/s ] features: [  75M 432k/s ] 6.8G  blocks: [  70k  97%   16/s ]
    cpus: 11 gc:  4% heap: 1.5G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 939M
    read( 0%) ->   (11/22) -> process(95% 95% 96% 95% 96% 96% 95% 95% 96% 96% 95%) -> (5.3k/62k) -> write(15%)
0:06:11 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  16M  53% 121k/s ] rels: [    0   0%    0/s ] features: [  78M 313k/s ] 6.8G  blocks: [  70k  97%   15/s ]
    cpus: 10.9 gc:  5% heap: 1.8G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 998M
    read( 0%) ->   (11/22) -> process(94% 93% 94% 94% 95% 94% 94% 94% 94% 94% 95%) -> (3.7k/62k) -> write(10%)
0:06:21 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  18M  57% 140k/s ] rels: [    0   0%    0/s ] features: [  82M 344k/s ] 7.9G  blocks: [  70k  98%   17/s ]
    cpus: 10.9 gc:  4% heap: 1.3G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1G
    read( 0%) ->   (11/22) -> process(95% 95% 95% 95% 96% 96% 95% 95% 94% 95% 94%) -> (7.3k/62k) -> write(12%)
0:06:31 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  19M  61% 134k/s ] rels: [    0   0%    0/s ] features: [  85M 335k/s ] 7.9G  blocks: [  71k  98%   16/s ]
    cpus: 10.7 gc:  4% heap: 1.4G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1G
    read( 0%) ->   (11/22) -> process(92% 91% 92% 93% 92% 92% 92% 93% 93% 93% 93%) -> (4.9k/62k) -> write(12%)
0:06:41 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  21M  66% 156k/s ] rels: [    0   0%    0/s ] features: [  89M 394k/s ] 7.9G  blocks: [  71k  98%   19/s ]
    cpus: 10.9 gc:  4% heap: 1.8G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1G
    read( 0%) ->   (11/22) -> process(96% 95% 96% 95% 96% 94% 95% 95% 95% 95% 96%) -> (5.4k/62k) -> write(10%)
0:06:51 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  22M  72% 168k/s ] rels: [    0   0%    0/s ] features: [  93M 411k/s ] 7.9G  blocks: [  71k  98%   21/s ]
    cpus: 11 gc:  4% heap: 1G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1.1G
    read( 0%) ->   (11/22) -> process(94% 95% 95% 95% 95% 95% 94% 94% 94% 95% 95%) -> (5.3k/62k) -> write(11%)
0:07:01 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  24M  76% 133k/s ] rels: [    0   0%    0/s ] features: [  97M 346k/s ] 9.1G  blocks: [  71k  99%   16/s ]
    cpus: 10.9 gc:  4% heap: 1.4G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1.1G
    read( 0%) ->   (11/22) -> process(95% 95% 94% 94% 95% 94% 94% 95% 94% 94% 95%) ->  (4k/62k) -> write(13%)
0:07:11 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  25M  79% 105k/s ] rels: [    0   0%    0/s ] features: [ 100M 335k/s ] 9.1G  blocks: [  71k  99%   12/s ]
    cpus: 10.8 gc:  4% heap: 2.1G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1.1G
    read( 0%) ->   (11/22) -> process(95% 94% 95% 95% 95% 95% 94% 94% 94% 95% 95%) -> (6.5k/62k) -> write( 8%)
0:07:21 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  25M  81%  73k/s ] rels: [    0   0%    0/s ] features: [ 103M 256k/s ] 9.1G  blocks: [  71k  99%    9/s ]
    cpus: 10.6 gc:  4% heap: 1.4G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1.2G
    read( 0%) ->   (11/22) -> process(94% 94% 93% 93% 94% 95% 94% 93% 96% 93% 93%) -> (3.3k/62k) -> write( 7%)
0:07:31 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  26M  84%  82k/s ] rels: [    0   0%    0/s ] features: [ 105M 256k/s ] 9.1G  blocks: [  72k  99%   10/s ]
    cpus: 10.8 gc:  4% heap: 1.8G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1.2G
    read( 0%) ->   (11/22) -> process(93% 94% 94% 93% 94% 94% 96% 94% 94% 94% 95%) -> (4.9k/62k) -> write(10%)
0:07:41 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  27M  87%  94k/s ] rels: [    0   0%    0/s ] features: [ 108M 272k/s ] 9.1G  blocks: [  72k  99%   11/s ]
    cpus: 10.8 gc:  4% heap: 1.5G/6.4G direct: 54M postGC: 1G relInfo:  17M mpGeoms: 1.3G
    read( 0%) ->   (11/22) -> process(94% 93% 95% 94% 94% 94% 94% 94% 93% 95% 94%) -> (4.8k/62k) -> write( 8%)
0:07:51 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  28M  90%  84k/s ] rels: [    0   0%    0/s ] features: [ 111M 264k/s ] 9.1G  blocks: [  72k  99%   10/s ]
    cpus: 10.8 gc:  4% heap: 1G/6.4G direct: 54M postGC: 1.1G relInfo:  17M mpGeoms: 1.3G
    read( 0%) ->   (11/22) -> process(94% 94% 94% 94% 94% 95% 94% 93% 93% 94% 94%) -> (6.3k/62k) -> write(10%)
0:08:01 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  29M  92%  82k/s ] rels: [    0   0%    0/s ] features: [ 113M 228k/s ] 10G   blocks: [  72k  99%   10/s ]
    cpus: 10.7 gc:  4% heap: 2.2G/6.4G direct: 54M postGC: 1.1G relInfo:  17M mpGeoms: 1.4G
    read( 0%) ->   (11/22) -> process(95% 93% 93% 95% 95% 93% 95% 94% 95% 95% 94%) -> (5.9k/62k) -> write(10%)
0:08:11 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  30M  95%  85k/s ] rels: [    0   0%    0/s ] features: [ 115M 207k/s ] 10G   blocks: [  72k 100%   10/s ]
    cpus: 10.5 gc:  4% heap: 1G/6.4G direct: 54M postGC: 1.1G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(93% 91% 92% 92% 93% 93% 93% 93% 92% 92% 92%) -> (6.7k/62k) -> write( 6%)
0:08:21 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  31M  98%  83k/s ] rels: [    0   0%    0/s ] features: [ 117M 227k/s ] 10G   blocks: [  72k 100%   10/s ]
    cpus: 10.8 gc:  4% heap: 2.1G/6.4G direct: 54M postGC: 1.1G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(94% 93% 94% 95% 94% 95% 95% 93% 93% 94% 95%) -> (5.4k/62k) -> write( 9%)
0:08:30 INF [osm_pass2:process] - Finished ways: 31,787,492 (98k/s) in 5m22s cpu:57m58s gc:13s avg:10.8
0:08:30 INF [osm_pass2:process] - madvise not available on this system - node location lookup may be slower when less free RAM is available outside the JVM
0:08:31 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  31M 100%  75k/s ] rels: [ 1.4k   0%  142/s ] features: [ 120M 236k/s ] 10G   blocks: [  72k 100%    9/s ]
    cpus: 10.2 gc:  4% heap: 2.3G/6.4G direct: 54M postGC: 1.1G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(86% 86% 91% 86% 83% 87% 85% 89% 92% 85% 84%) -> (5.3k/62k) -> write( 7%)
0:08:41 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  31M 100%    0/s ] rels: [  17k   3% 1.6k/s ] features: [ 120M  26k/s ] 10G   blocks: [  72k 100%    0/s ]
    cpus: 11.2 gc:  2% heap: 2.1G/6.4G direct: 54M postGC: 1.1G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(97% 92% 94% 96% 96% 97% 96% 97% 95% 96% 96%) -> (5.3k/62k) -> write( 1%)
0:08:51 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  31M 100%    0/s ] rels: [  31k   6% 1.3k/s ] features: [ 120M  20k/s ] 10G   blocks: [  72k 100%    0/s ]
    cpus: 11 gc:  5% heap: 1.2G/6.4G direct: 54M postGC: 1.3G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(94% 94% 94% 94% 94% 94% 94% 94% 94% 94% 94%) -> (4.3k/62k) -> write( 1%)
0:09:01 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  31M 100%    0/s ] rels: [  43k   8% 1.2k/s ] features: [ 120M  19k/s ] 10G   blocks: [  72k 100%    0/s ]
    cpus: 10.9 gc:  6% heap: 1.7G/6.4G direct: 54M postGC: 1.5G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(94% 94% 94% 93% 94% 94% 94% 94% 94% 94% 94%) -> (6.9k/62k) -> write( 1%)
0:09:11 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  31M 100%    0/s ] rels: [  56k  10% 1.2k/s ] features: [ 121M  24k/s ] 10G   blocks: [  72k 100%    0/s ]
    cpus: 10.9 gc:  4% heap: 1.4G/6.4G direct: 54M postGC: 1.5G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(95% 95% 95% 95% 95% 95% 95% 95% 95% 95% 95%) -> (5.9k/62k) -> write( 2%)
0:09:21 INF [osm_pass2] -  nodes: [ 549M 100%    0/s ] 7G    ways: [  31M 100%    0/s ] rels: [  67k  13% 1.1k/s ] features: [ 121M  20k/s ] 10G   blocks: [  72k 100%   <1/s ]
    cpus: 10.7 gc:  6% heap: 2G/6.4G direct: 54M postGC: 1.2G relInfo:  17M mpGeoms: 1.5G
    read( 0%) ->   (11/22) -> process(92% 92% 91% 92% 91% 91% 91% 91% 91% 92% 92%) -> (5.8k/62k) -> write( 1%)
^C

Command used for the US run:

java -Xmx6g -jar planetiler_0.8.2.jar --force --keep_unzipped=true --natural_earth_keep_unzipped=true --maxzoom=14 --osm_path=..\..\datasources\regions\us-latest.osm.pbf --output=us.pmtiles --only_layers=landuse,water,transportation,landcover,place,boundary,poi,transportation_name,building,housenumber,waterway
0:00:00 DEB - argument: config=null (path to config file)
0:00:00 DEB - argument: download_dir=data\sources (download directory)
0:00:00 DEB - argument: area=monaco (name of the extract to download if osm_url/osm_path not specified (i.e. 'monaco' 'rhode island' 'australia' or 'planet'))
0:00:00 INF - argument: stats=use in-memory stats
0:00:00 DEB - argument: madvise=true (default value for whether to use linux madvise(random) to improve memory-mapped read performance for temporary storage)
0:00:00 DEB - argument: storage=mmap (default storage type for temporary data, one of [ram, mmap, direct])
0:00:00 DEB - argument: threads=12 (num threads)
0:00:00 DEB - argument: write_threads=1 (number of threads to use when writing temp features)
0:00:00 DEB - argument: process_threads=11 (number of threads to use when processing input features)
0:00:00 DEB - argument: bounds=null (bounds)
0:00:00 DEB - argument: polygon=null (a .poly file that limits output to tiles intersecting the shape)
0:00:00 DEB - argument: minzoom=0 (minimum zoom level)
0:00:00 DEB - argument: maxzoom=14 (maximum zoom level up to 15)
0:00:00 DEB - argument: render_maxzoom=14 (maximum rendering zoom level up to 15)
0:00:00 DEB - argument: tmpdir=data\tmp (temp directory)
0:00:00 DEB - argument: feature_read_threads=1 (number of threads to use when reading features at tile write time)
0:00:00 DEB - argument: tile_write_threads=1 (number of threads used to write tiles - only supported by [files, csv, tsv, proto, pbf, json])
0:00:00 DEB - argument: loginterval=10 seconds (time between logs)
0:00:00 DEB - argument: force=true (overwriting output file and ignore disk/RAM warnings)
0:00:00 DEB - argument: append=false (append to the output file - only supported by [files, csv, tsv, proto, pbf, json])
0:00:00 DEB - argument: compress_temp=false (compress temporary feature storage (uses more CPU, but less disk space))
0:00:00 DEB - argument: mmap_temp=true (use memory-mapped IO for temp feature files)
0:00:00 DEB - argument: sort_max_readers=6 (maximum number of concurrent read threads to use when sorting chunks)
0:00:00 DEB - argument: sort_max_writers=6 (maximum number of concurrent write threads to use when sorting chunks)
0:00:00 DEB - argument: nodemap_type=sparsearray (type of node location map, one of [noop, sortedtable, sparsearray, array])
0:00:00 DEB - argument: nodemap_storage=mmap (storage for node location map, one of [ram, mmap, direct])
0:00:00 DEB - argument: nodemap_madvise=true (use linux madvise(random) for node locations)
0:00:00 DEB - argument: multipolygon_geometry_storage=mmap (storage for multipolygon geometries, one of [ram, mmap, direct])
0:00:00 DEB - argument: multipolygon_geometry_madvise=true (use linux madvise(random) for temporary multipolygon geometry storage)
0:00:00 DEB - argument: http_user_agent=Planetiler downloader (https://github.com/onthegomap/planetiler) (User-Agent header to set when downloading files over HTTP)
0:00:00 DEB - argument: http_timeout=30 seconds (Timeout to use when downloading files over HTTP)
0:00:00 DEB - argument: http_retries=1 (Retries to use when downloading files over HTTP)
0:00:00 DEB - argument: http_retry_wait=5 seconds (How long to wait before retrying HTTP request)
0:00:00 DEB - argument: download_chunk_size_mb=100 (Size of file chunks to download in parallel in megabytes)
0:00:00 DEB - argument: download_threads=1 (Number of parallel threads to use when downloading each file)
0:00:00 DEB - argument: download_max_bandwidth= (Maximum bandwidth to consume when downloading files in units mb/s, mbps, kbps, etc.)
0:00:00 DEB - argument: min_feature_size_at_max_zoom=0.0625 (Default value for the minimum size in tile pixels of features to emit at the maximum zoom level to allow for overzooming)
0:00:00 DEB - argument: min_feature_size=1.0 (Default value for the minimum size in tile pixels of features to emit below the maximum zoom level)
0:00:00 DEB - argument: simplify_tolerance_at_max_zoom=0.0625 (Default value for the tile pixel tolerance to use when simplifying features at the maximum zoom level to allow for overzooming)
0:00:00 DEB - argument: simplify_tolerance=0.1 (Default value for the tile pixel tolerance to use when simplifying features below the maximum zoom level)
0:00:00 DEB - argument: osm_lazy_reads=true (Read OSM blocks from disk in worker threads)
0:00:00 DEB - argument: skip_filled_tiles=false (Skip writing tiles containing only polygon fills to the output)
0:00:00 DEB - argument: tile_warning_size_mb=1.0 (Maximum size in megabytes of a tile to emit a warning about)
0:00:00 DEB - argument: color=null (Color the terminal output)
0:00:00 DEB - argument: keep_unzipped=true (keep unzipped sources by default after reading)
0:00:00 DEB - argument: tile_compression=gzip (the tile compression, one of [none, gzip])
0:00:00 DEB - argument: output_layerstats=false (output a tsv.gz file for each tile/layer size)
0:00:00 DEB - argument: debug_url=https://onthegomap.github.io/planetiler-demo/#{z}/{lat}/{lon} (debug url to use for displaying tiles with {z} {lat} {lon} placeholders)
0:00:00 DEB - argument: tile_weights=data\tile_weights.tsv.gz (tsv.gz file with columns z,x,y,loads to generate weighted average tile size stat)
0:00:00 DEB - argument: max_point_buffer=Infinity (Max tile pixels to include points outside tile bounds. Set to a lower value to reduce tile size for clients that handle label collisions across tiles (most web and native clients). NOTE: Do not reduce if you need to support raster tile rendering)
0:00:00 DEB - argument: log_jts_exceptions=false (Emit verbose details to debug JTS geometry errors)
0:00:00 DEB - argument: feature_source_id_multiplier=10 (Set vector tile feature IDs to (featureId * thisValue) + sourceId where sourceId is 1 for OSM nodes, 2 for ways, 3 for relations, and 0 for other sources. Set to false to disable.)
0:00:00 DEB - argument: only_download=false (download source data then exit)
0:00:00 DEB - argument: tests=null (run test cases in a yaml then quit)
0:00:00 DEB - argument: download=false (download sources)
0:00:00 DEB - argument: refresh_sources=false (download new version of source files if they have changed)
0:00:00 DEB - argument: download_osm_tile_weights=false (download OSM tile weights file)
0:00:00 DEB - argument: temp_nodes=data\tmp\node.db (temp node db location)
0:00:00 DEB - argument: temp_multipolygons=data\tmp\multipolygon.db (temp multipolygon db location)
0:00:00 DEB - argument: temp_features=data\tmp\feature.db (temp feature db location)
0:00:00 DEB - argument: osm_parse_node_bounds=false (parse bounds from OSM nodes instead of header)
0:00:00 DEB - argument: only_fetch_wikidata=false (fetch wikidata translations then quit)
0:00:00 DEB - argument: fetch_wikidata=false (fetch wikidata translations then continue)
0:00:00 DEB - argument: use_wikidata=true (use wikidata translations)
0:00:00 DEB - argument: wikidata_cache=data\sources\wikidata_names.json (wikidata cache file)
0:00:00 DEB - argument: lake_centerlines_path=data\sources\lake_centerline.shp.zip (lake_centerlines shapefile path)
0:00:00 DEB - argument: refresh_lake_centerlines=false (Download new version of lake_centerlines if changed)
0:00:00 DEB - argument: free_lake_centerlines_after_read=false (delete lake_centerlines input file after reading to make space for output (reduces peak disk usage))
0:00:00 DEB - argument: water_polygons_path=data\sources\water-polygons-split-3857.zip (water_polygons shapefile path)
0:00:00 DEB - argument: refresh_water_polygons=false (Download new version of water_polygons if changed)
0:00:00 DEB - argument: free_water_polygons_after_read=false (delete water_polygons input file after reading to make space for output (reduces peak disk usage))
0:00:00 DEB - argument: natural_earth_path=data\sources\natural_earth_vector.sqlite.zip (natural_earth sqlite db path)
0:00:00 DEB - argument: refresh_natural_earth=false (Download new version of natural_earth if changed)
0:00:00 DEB - argument: free_natural_earth_after_read=false (delete natural_earth input file after reading to make space for output (reduces peak disk usage))
0:00:00 DEB - argument: natural_earth_keep_unzipped=true (keep unzipped natural_earth after reading)
0:00:00 DEB - argument: osm_path=..\..\datasources\regions\us-latest.osm.pbf (osm OSM input file path)
0:00:00 DEB - argument: refresh_osm=false (Download new version of osm if changed)
0:00:00 DEB - argument: free_osm_after_read=false (delete osm input file after reading to make space for output (reduces peak disk usage))
0:00:00 INF - madvise not available on this system
0:00:00 DEB - argument: output=us.pmtiles (output tile archive path)
0:00:00 DEB - argument: version=false (show version then exit)
0:00:00 INF - Planetiler build git hash: a74f274c3162cd547c9b19a9a5e8705982a61755
0:00:00 INF - Planetiler build version: 0.8.2
0:00:00 INF - Planetiler build timestamp: 2024-07-02T11:38:28.480Z
0:00:00 DEB - argument: transliterate=true (attempt to transliterate latin names)
0:00:00 DEB - argument: languages=am,ar,az,be,bg,bn,br,bs,ca,co,cs,cy,da,de,el,en,eo,es,et,eu,fa,fi,fr,fy,ga,gd,he,hi,hr,hu,hy,id,is,it,ja,ja_kana,ja_rm,ja-Latn,ja-Hira,ka,kk,kn,ko,ko-Latn,ku,la,lb,lt,lv,mk,mt,ml,nl,no,oc,pa,pnb,pl,pt,rm,ro,ru,sk,sl,sq,sr,sr-Latn,sv,ta,te,th,tr,uk,ur,vi,zh,zh-Hant,zh-Hans (languages to use)
0:00:00 DEB - argument: only_layers=landuse,water,transportation,landcover,place,boundary,poi,transportation_name,building,housenumber,waterway (Include only certain layers)
0:00:00 DEB - argument: exclude_layers= (Exclude certain layers)
0:00:00 DEB - argument: boundary_country_names=true (boundary layer: add left/right codes of neighboring countries)
0:00:00 DEB - argument: boundary_osm_only=false (boundary layer: only use OSM, even at low zoom levels)
0:00:00 DEB - argument: transportation_z13_paths=false (transportation(_name) layer: show all paths on z13)
0:00:00 DEB - argument: building_merge_z13=true (building layer: merge nearby buildings at z13)
0:00:00 DEB - argument: transportation_name_brunnel=false (transportation_name layer: set to false to omit brunnel and help merge long highways)
0:00:00 DEB - argument: transportation_name_size_for_shield=false (transportation_name layer: allow road names on shorter segments (ie. they will have a shield))
0:00:00 DEB - argument: transportation_name_limit_merge=false (transportation_name layer: limit merge so we don't combine different relations to help merge long highways)
0:00:00 DEB - argument: transportation_name_minor_refs=false (transportation_name layer: include name and refs from minor road networks if not present on a way)
0:00:00 DEB - argument: help=false (show arguments then exit)
0:00:00 DEB - argument: layer_stats=G:\project\_bin\planetiler\us.pmtiles.layerstats.tsv.gz (layer stats output path)
0:00:00 INF - Building OpenMapTilesProfile profile into file:///G:/project/_bin/planetiler/us.pmtiles in these phases:
0:00:00 INF -   lake_centerlines: Process features in data\sources\lake_centerline.shp.zip
0:00:00 INF -   water_polygons: Process features in data\sources\water-polygons-split-3857.zip
0:00:00 INF -   natural_earth: Process features in data\sources\natural_earth_vector.sqlite.zip
0:00:00 INF -   osm_pass1: Pre-process OpenStreetMap input (store node locations then relation members)
0:00:00 INF -   osm_pass2: Process OpenStreetMap nodes, ways, then relations
0:00:00 INF -   sort: Sort rendered features by tile ID
0:00:00 INF -   archive: Encode each tile and write to TileArchiveConfig[format=PMTILES, scheme=FILE, uri=file:///G:/project/_bin/planetiler/us.pmtiles, options={}]
0:00:00 INF - no wikidata translations found, run with --fetch-wikidata to download
0:00:00 INF - ⚠️️ 47G storage on (G:) requested for read phase disk, 50G available
0:00:00 INF -    - 10G used for temporary node location cache
0:00:00 INF -    - 1.6G used for temporary multipolygon geometry cache
0:00:00 INF -    - 35G used for temporary feature storage
0:00:00 WAR - ❌️ 52G storage on (G:) requested for write phase disk, 50G available
0:00:00 WAR -    - 35G used for temporary feature storage
0:00:00 WAR -    - 17G used for archive output
0:00:00 DEB - ✓ 12G storage on (G:) requested for read phase, 50G available
0:00:00 DEB -  - 10G used for sparsearray node location cache
0:00:00 DEB -  - 1.6G used for multipolygon way geometries
0:00:00 DEB - ✓ 3.4G JVM heap requested for read phase, 6.4G available
0:00:00 DEB -  - 300M used for sparsearray node location in-memory index
0:00:00 DEB -  - 3.1G used for temporary profile storage
0:00:00 DEB - ✓ 12G temporary files and 27G of free memory for OS to cache them
0:00:00 INF - Setting map bounds from input: Env[-180.0 : 180.0, 15.92097 : 72.98845]
0:00:00 DEB - argument: archive_name=OpenMapTiles ('name' attribute for tileset metadata)
0:00:00 DEB - argument: archive_description=A tileset showcasing all layers in OpenMapTiles. https://openmaptiles.org ('description' attribute for tileset metadata)
0:00:00 DEB - argument: archive_attribution=<a href="https://www.openmaptiles.org/" target="_blank">&copy; OpenMapTiles</a> <a href="https://www.openstreetmap.org/copyright" target="_blank">&copy; OpenStreetMap contributors</a> ('attribution' attribute for tileset metadata)
0:00:00 DEB - argument: archive_version=3.15.0 ('version' attribute for tileset metadata)
0:00:00 DEB - argument: archive_type=baselayer ('type' attribute for tileset metadata)
0:00:00 DEB - argument: archive_format=pbf ('format' attribute for tileset metadata)
0:00:00 INF - Using merge sort feature map, chunk size=2000mb max workers=12
0:00:00 INF [lake_centerlines] - Skipping since profile does not use it
0:00:00 INF [water_polygons] -
0:00:00 INF [water_polygons] - Starting...
0:00:09 INF [water_polygons:write] - madvise not available on this system to speed up temporary feature IO.
0:00:19 INF [water_polygons] -  read: [  424   8%   42/s ] write: [ 1.7M 176k/s ] 2G
    cpus: 11.6 gc:  7% heap: 1.5G/6.4G direct: 54M postGC: 1.3G
    ->     (0/3) -> read(56%) ->   (1k/1k) -> process(75% 84% 82% 74% 69% 76% 76% 78% 79% 73% 71%) -> (5.3k/62k) -> write( 2%)
0:00:30 INF [water_polygons] -  read: [  834  17%   40/s ] write: [ 3.5M 172k/s ] 2G
    cpus: 11.2 gc:  6% heap: 2.5G/6.4G direct: 54M postGC: 1.5G
    ->     (0/3) -> read(15%) ->   (1k/1k) -> process(93% 93% 92% 93% 93% 92% 92% 93% 93% 92% 92%) -> (4.7k/62k) -> write( 2%)
0:00:40 INF [water_polygons] -  read: [ 1.2k  25%   39/s ] write: [ 5.2M 175k/s ] 2G
    cpus: 11.3 gc:  6% heap: 1.8G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read(17%) ->   (1k/1k) -> process(92% 89% 90% 91% 90% 92% 91% 90% 89% 91% 92%) -> (4.4k/62k) -> write( 3%)
0:00:50 INF [water_polygons] -  read: [ 1.6k  32%   39/s ] write: [ 6.7M 143k/s ] 2G
    cpus: 11.1 gc:  5% heap: 1.6G/6.4G direct: 54M postGC: 1.6G
    ->     (0/3) -> read(13%) ->   (1k/1k) -> process(95% 95% 95% 95% 95% 95% 95% 95% 95% 95% 95%) -> (4.4k/62k) -> write( 2%)
0:01:00 INF [water_polygons] -  read: [   2k  40%   38/s ] write: [ 8.3M 167k/s ] 2G
    cpus: 11.2 gc:  5% heap: 1.5G/6.4G direct: 54M postGC: 1.5G
    ->     (0/3) -> read(14%) ->   (1k/1k) -> process(94% 93% 94% 93% 93% 94% 94% 94% 93% 92% 94%) -> (4.5k/62k) -> write( 3%)
0:01:10 INF [water_polygons] -  read: [ 2.3k  47%   34/s ] write: [ 9.9M 151k/s ] 2G
    cpus: 11.2 gc:  6% heap: 2.8G/6.4G direct: 54M postGC: 1.7G
    ->     (0/3) -> read(14%) ->   (1k/1k) -> process(94% 94% 94% 94% 94% 95% 95% 93% 95% 94% 94%) -> (5.1k/62k) -> write( 2%)
0:01:20 INF [water_polygons] -  read: [ 2.7k  56%   43/s ] write: [  11M 167k/s ] 2G
    cpus: 11 gc:  5% heap: 2.3G/6.4G direct: 54M postGC: 2G
    ->     (0/3) -> read( 9%) ->   (1k/1k) -> process(95% 95% 95% 95% 95% 95% 95% 95% 95% 94% 95%) -> (6.2k/62k) -> write( 3%)
0:01:30 INF [water_polygons] -  read: [ 3.1k  63%   34/s ] write: [  12M 131k/s ] 2G
    cpus: 11 gc:  3% heap: 3.4G/6.4G direct: 54M postGC: 1.1G
    ->     (0/3) -> read( 0%) ->   (1k/1k) -> process(96% 95% 97% 96% 95% 97% 96% 95% 95% 96% 97%) -> (7.3k/62k) -> write( 2%)
0:01:40 INF [water_polygons] -  read: [ 3.5k  70%   39/s ] write: [  14M 158k/s ] 2G
    cpus: 11 gc:  4% heap: 1.7G/6.4G direct: 54M postGC: 1.3G
    ->     (0/3) -> read( 0%) ->   (1k/1k) -> process(96% 96% 96% 96% 96% 96% 96% 96% 96% 96% 96%) -> (18k/62k) -> write( 3%)
0:01:50 INF [water_polygons] -  read: [   5k 100%  148/s ] write: [  46M 3.1M/s ] 4G
    cpus: 6.6 gc:  2% heap: 2.3G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read( -%) ->    (0/1k) -> process( -%  -%  -%  -%  -%  -% 95%  -%  -%  -%  -%) -> (5.3k/62k) -> write(43%)
0:01:54 INF [water_polygons] -  read: [   5k 100%    0/s ] write: [  46M 1.3k/s ] 2.4G
    cpus: 1 gc:  0% heap: 1.5G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read( -%) ->    (0/1k) -> process( -%  -%  -%  -%  -%  -%  -%  -%  -%  -%  -%) ->   (0/62k) -> write( -%)
0:01:54 INF [water_polygons] - Finished in 1m54s cpu:18m13s gc:5s avg:9.6
0:01:54 INF [water_polygons] -   read     1x(12% 14s wait:1m17s done:21s)
0:01:54 INF [water_polygons] -   process 11x(78% 1m29s done:17s)
0:01:54 INF [water_polygons] -   write    1x(6% 7s wait:1m37s done:9s)
0:01:54 INF [natural_earth] -
0:01:54 INF [natural_earth] - Starting...
0:02:03 INF [natural_earth] -  read: [ 349k 100%  41k/s ] write: [  18k 2.1k/s ] 2.4G
    cpus: 2.7 gc:  0% heap: 3.4G/6.4G direct: 54M postGC: 1.2G
    ->     (0/3) -> read( -%) ->    (0/1k) -> process( -%  -%  -%  -%  -%  -%  -%  -%  -%  -%  -%) ->   (0/62k) -> write( -%)
0:02:03 INF [natural_earth] - Finished in 9s cpu:24s avg:2.6
0:02:03 INF [natural_earth] -   read     1x(88% 8s)
0:02:03 INF [natural_earth] -   process 11x(6% 0.5s wait:8s)
0:02:03 INF [natural_earth] -   write    1x(0% 0s wait:8s)
0:02:03 INF [osm_pass1] -
0:02:03 INF [osm_pass1] - Starting...
0:02:13 INF [osm_pass1] -  nodes: [  86M 8.6M/s ] 1G    ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  10k   1k/s ]
    cpus: 4.7 gc:  1% heap: 3.6G/6.4G direct: 54M postGC: 1.2G hppc:  752
    read( 5%) ->    (1/31) -> parse(29% 32% 34% 38% 32% 35% 37% 35% 33% 34%) ->    (6/22) -> process(76%)
0:02:23 INF [osm_pass1] -  nodes: [ 117M 3.1M/s ] 1.3G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  14k  390/s ]
    cpus: 2.3 gc:  0% heap: 3.6G/6.4G direct: 54M postGC: 1.2G hppc:  752
    read( 4%) ->    (0/31) -> parse(19% 19% 18% 19% 19% 20% 19% 19% 18% 18%) ->    (9/22) -> process(33%)
0:02:33 INF [osm_pass1] -  nodes: [ 149M 3.1M/s ] 1.8G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  18k  398/s ]
    cpus: 2.2 gc:  0% heap: 4.3G/6.4G direct: 54M postGC: 1.2G hppc:  752
    read( 3%) ->    (0/31) -> parse(18% 18% 19% 18% 16% 18% 17% 19% 17% 19%) ->   (21/22) -> process(40%)
0:02:43 INF [osm_pass1] -  nodes: [ 193M 4.3M/s ] 2.5G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  24k  548/s ]
    cpus: 2.3 gc:  0% heap: 2.4G/6.4G direct: 54M postGC: 1.3G hppc:  752
    read( 5%) ->    (0/31) -> parse(18% 17% 17% 17% 19% 18% 15% 20% 18% 17%) ->   (21/22) -> process(47%)
0:02:53 INF [osm_pass1] -  nodes: [ 231M 3.8M/s ] 3.1G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  28k  468/s ]
    cpus: 2 gc:  0% heap: 1.7G/6.4G direct: 54M postGC: 1.3G hppc:  752
    read( 3%) ->    (0/31) -> parse(15% 15% 15% 14% 18% 15% 17% 14% 14% 15%) ->   (19/22) -> process(42%)
0:03:03 INF [osm_pass1] -  nodes: [ 261M 2.9M/s ] 3.8G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  32k  371/s ]
    cpus: 1.8 gc:  0% heap: 4.1G/6.4G direct: 54M postGC: 1.3G hppc:  752
    read( 3%) ->    (0/31) -> parse(13% 14% 13% 16% 14% 15% 14% 13% 13% 14%) ->   (21/22) -> process(43%)
0:03:14 INF [osm_pass1] -  nodes: [ 292M 3.1M/s ] 4.5G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  36k  383/s ]
    cpus: 2.1 gc:  0% heap: 4G/6.4G direct: 54M postGC: 1.3G hppc:  752
    read( 5%) ->    (0/31) -> parse(14% 15% 15% 16% 15% 16% 16% 17% 16% 15%) ->    (7/22) -> process(44%)
0:03:24 INF [osm_pass1] -  nodes: [ 328M 3.5M/s ] 5.4G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  41k  444/s ]
    cpus: 2.1 gc:  0% heap: 2.3G/6.4G direct: 54M postGC: 1.3G hppc:  752
    read( 4%) ->    (0/31) -> parse(17% 18% 16% 16% 14% 15% 13% 15% 14% 16%) ->   (21/22) -> process(49%)
0:03:34 INF [osm_pass1] -  nodes: [ 376M 4.8M/s ] 6G    ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  47k  605/s ]
    cpus: 2.5 gc:  0% heap: 2.3G/6.4G direct: 54M postGC: 1.3G hppc:  752
    read( 4%) ->    (0/31) -> parse(20% 18% 21% 18% 17% 21% 18% 18% 20% 20%) ->    (8/22) -> process(54%)
0:03:44 INF [osm_pass1] -  nodes: [ 427M   5M/s ] 6.7G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  53k  617/s ]
    cpus: 2.3 gc:  0% heap: 3.4G/6.4G direct: 54M postGC: 1.4G hppc:  752
    read( 5%) ->    (0/31) -> parse(16% 17% 18% 18% 16% 19% 18% 18% 15% 17%) ->   (16/22) -> process(48%)
0:03:54 INF [osm_pass1] -  nodes: [ 486M 5.8M/s ] 7.4G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  60k  739/s ]
    cpus: 2.5 gc:  1% heap: 1.5G/6.4G direct: 54M postGC: 1.4G hppc:  752
    read( 6%) ->    (0/31) -> parse(19% 23% 18% 19% 18% 20% 19% 20% 17% 17%) ->   (21/22) -> process(51%)
0:04:04 INF [osm_pass1] -  nodes: [ 551M 6.5M/s ] 8G    ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  68k  799/s ]
    cpus: 2.7 gc:  1% heap: 1.6G/6.4G direct: 54M postGC: 1.4G hppc:  752
    read( 4%) ->    (0/31) -> parse(22% 20% 22% 18% 21% 20% 21% 20% 18% 20%) ->   (19/22) -> process(54%)
0:04:14 INF [osm_pass1] -  nodes: [ 610M 5.7M/s ] 8.7G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  76k  735/s ]
    cpus: 2.5 gc:  0% heap: 2.5G/6.4G direct: 54M postGC: 1.5G hppc:  752
    read( 4%) ->    (0/31) -> parse(18% 19% 18% 21% 20% 17% 19% 20% 20% 17%) ->   (21/22) -> process(54%)
0:04:24 INF [osm_pass1] -  nodes: [ 671M   6M/s ] 9.4G  ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  83k  757/s ]
    cpus: 2.6 gc:  0% heap: 4.3G/6.4G direct: 54M postGC: 1.5G hppc:  752
    read( 4%) ->    (0/31) -> parse(19% 22% 20% 20% 20% 20% 19% 18% 22% 21%) ->   (21/22) -> process(55%)
0:04:34 INF [osm_pass1] -  nodes: [ 728M 5.6M/s ] 10G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  91k  695/s ]
    cpus: 2.5 gc:  1% heap: 4.4G/6.4G direct: 54M postGC: 1.5G hppc:  752
    read( 5%) ->    (0/31) -> parse(22% 17% 18% 21% 20% 18% 19% 21% 19% 15%) ->   (20/22) -> process(50%)
0:04:44 INF [osm_pass1] -  nodes: [ 782M 5.3M/s ] 10G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [  97k  677/s ]
    cpus: 2.5 gc:  1% heap: 3.9G/6.4G direct: 54M postGC: 1.6G hppc:  752
    read( 4%) ->    (0/31) -> parse(19% 21% 16% 20% 20% 18% 20% 17% 21% 17%) ->   (21/22) -> process(52%)
0:04:55 INF [osm_pass1] -  nodes: [ 840M 5.7M/s ] 11G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 105k  707/s ]
    cpus: 2.5 gc:  1% heap: 4.2G/6.4G direct: 54M postGC: 1.6G hppc:  752
    read( 6%) ->    (0/31) -> parse(19% 19% 18% 15% 21% 21% 18% 22% 19% 19%) ->   (18/22) -> process(52%)
0:05:05 INF [osm_pass1] -  nodes: [ 892M 5.1M/s ] 12G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 111k  657/s ]
    cpus: 2.3 gc:  0% heap: 2.5G/6.4G direct: 54M postGC: 1.6G hppc:  752
    read( 4%) ->    (0/31) -> parse(17% 19% 17% 16% 20% 15% 17% 18% 19% 17%) ->   (19/22) -> process(51%)
0:05:15 INF [osm_pass1] -  nodes: [ 947M 5.4M/s ] 12G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 118k  680/s ]
    cpus: 2.6 gc:  0% heap: 2.6G/6.4G direct: 54M postGC: 1.6G hppc:  752
    read( 4%) ->    (0/31) -> parse(18% 17% 20% 21% 22% 20% 23% 17% 19% 21%) ->   (20/22) -> process(53%)
0:05:25 INF [osm_pass1] -  nodes: [ 993M 4.5M/s ] 13G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 124k  568/s ]
    cpus: 2.3 gc:  0% heap: 3.2G/6.4G direct: 54M postGC: 1.7G hppc:  752
    read( 5%) ->    (0/31) -> parse(19% 20% 19% 17% 17% 17% 17% 16% 20% 20%) ->   (10/22) -> process(42%)
0:05:35 INF [osm_pass1] -  nodes: [   1B 5.9M/s ] 13G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 131k  749/s ]
    cpus: 2.6 gc:  0% heap: 4.4G/6.4G direct: 54M postGC: 1.7G hppc:  752
    read( 5%) ->    (0/31) -> parse(18% 20% 20% 21% 22% 22% 20% 18% 19% 21%) ->    (4/22) -> process(51%)
0:05:45 INF [osm_pass1] -  nodes: [ 1.1B   6M/s ] 14G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 139k  739/s ]
    cpus: 2.6 gc:  1% heap: 3.1G/6.4G direct: 54M postGC: 1.7G hppc:  752
    read( 4%) ->    (0/31) -> parse(22% 21% 21% 19% 19% 19% 23% 19% 20% 20%) ->   (18/22) -> process(53%)
0:05:55 INF [osm_pass1] -  nodes: [ 1.1B 5.8M/s ] 15G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 146k  751/s ]
    cpus: 2.6 gc:  0% heap: 4.3G/6.4G direct: 54M postGC: 1.7G hppc:  752
    read( 5%) ->    (0/31) -> parse(19% 23% 20% 19% 21% 22% 20% 17% 19% 20%) ->   (11/22) -> process(50%)
0:06:05 INF [osm_pass1] -  nodes: [ 1.2B   7M/s ] 15G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 155k  879/s ]
    cpus: 2.9 gc:  1% heap: 3.4G/6.4G direct: 54M postGC: 1.7G hppc:  752
    read( 6%) ->    (0/31) -> parse(23% 22% 24% 22% 22% 21% 20% 24% 22% 23%) ->   (21/22) -> process(59%)
0:06:15 INF [osm_pass1] -  nodes: [ 1.3B   7M/s ] 16G   ways: [    0    0/s ] rels: [    0    0/s ] blocks: [ 164k  863/s ]
    cpus: 2.7 gc:  1% heap: 2.4G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 6%) ->    (0/31) -> parse(17% 22% 23% 23% 20% 21% 22% 19% 21% 22%) ->   (20/22) -> process(55%)
0:06:23 INF [osm_pass1:process] - Finished nodes: 1,367,130,624 (5.2M/s) in 4m20s cpu:10m55s gc:1s avg:2.5
0:06:25 INF [osm_pass1] -  nodes: [ 1.3B 5.1M/s ] 16G   ways: [ 1.6M 160k/s ] rels: [    0    0/s ] blocks: [ 171k  669/s ]
    cpus: 3 gc:  1% heap: 4.2G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 5%) ->    (4/31) -> parse(24% 21% 24% 22% 23% 22% 20% 23% 21% 22%) ->   (17/22) -> process(46%)
0:06:35 INF [osm_pass1] -  nodes: [ 1.3B    0/s ] 16G   ways: [  11M 983k/s ] rels: [    0    0/s ] blocks: [ 172k  122/s ]
    cpus: 2.7 gc:  1% heap: 3.1G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 2%) ->   (11/31) -> parse(24% 25% 23% 25% 23% 25% 23% 24% 24% 23%) ->   (21/22) -> process(24%)
0:06:46 INF [osm_pass1] -  nodes: [ 1.3B    0/s ] 16G   ways: [  25M 1.3M/s ] rels: [    0    0/s ] blocks: [ 174k  169/s ]
    cpus: 3.2 gc:  1% heap: 3.6G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 1%) ->    (7/31) -> parse(30% 30% 28% 30% 30% 30% 30% 27% 27% 30%) ->   (17/22) -> process(24%)
0:06:56 INF [osm_pass1] -  nodes: [ 1.3B    0/s ] 16G   ways: [  46M 2.1M/s ] rels: [    0    0/s ] blocks: [ 176k  265/s ]
    cpus: 4.6 gc:  1% heap: 2.6G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 4%) ->    (4/31) -> parse(43% 41% 43% 43% 39% 41% 42% 42% 41% 43%) ->   (20/22) -> process(36%)
0:07:06 INF [osm_pass1] -  nodes: [ 1.3B    0/s ] 16G   ways: [  73M 2.6M/s ] rels: [    0    0/s ] blocks: [ 180k  336/s ]
    cpus: 5.1 gc:  1% heap: 1.8G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 3%) ->    (9/31) -> parse(45% 47% 47% 45% 44% 45% 45% 47% 44% 45%) ->   (21/22) -> process(43%)
0:07:16 INF [osm_pass1] -  nodes: [ 1.3B    0/s ] 16G   ways: [ 103M 2.9M/s ] rels: [    0    0/s ] blocks: [ 183k  371/s ]
    cpus: 5.6 gc:  1% heap: 4.7G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 5%) ->    (1/31) -> parse(50% 49% 49% 49% 49% 50% 49% 48% 50% 47%) ->   (13/22) -> process(47%)
0:07:26 INF [osm_pass1] -  nodes: [ 1.3B    0/s ] 16G   ways: [ 131M 2.7M/s ] rels: [    0    0/s ] blocks: [ 187k  339/s ]
    cpus: 5.1 gc:  1% heap: 2.7G/6.4G direct: 54M postGC: 1.8G hppc:  752
    read( 3%) ->    (7/31) -> parse(47% 45% 46% 46% 45% 45% 45% 46% 46% 46%) ->   (21/22) -> process(42%)
0:07:28 INF [osm_pass1:process] - Finished ways: 135,105,294 (2M/s) in 1m5s cpu:4m41s avg:4.3
0:07:30 INF [osm_pass1:process] - Finished relations: 1,276,782 (615k/s) in 2s cpu:9s avg:4.3
0:07:30 INF [osm_pass1] -  nodes: [ 1.3B    0/s ] 16G   ways: [ 135M   1M/s ] rels: [ 1.2M 318k/s ] blocks: [ 187k  167/s ]
    cpus: 4.9 gc:  2% heap: 4G/6.4G direct: 54M postGC: 2G hppc:  77M
    read( -%) ->    (0/31) -> parse( -%  -%  -%  -%  -%  -%  -%  -%  -%  -%) ->    (0/22) -> process( -%)
0:07:30 DEB [osm_pass1] - Processed 187,941 blocks:
0:07:30 DEB [osm_pass1] -   nodes: 1,367,130,624 (5.2M/s) in 4m20s cpu:10m55s gc:1s avg:2.5
0:07:30 DEB [osm_pass1] -   ways: 135,105,294 (2M/s) in 1m5s cpu:4m41s avg:4.3
0:07:30 DEB [osm_pass1] -   relations: 1,276,782 (615k/s) in 2s cpu:9s avg:4.3
0:07:30 INF [osm_pass1] - Finished in 5m27s cpu:15m46s gc:2s avg:2.9
0:07:30 INF [osm_pass1] -   read     1x(4% 14s wait:1m12s)
0:07:30 INF [osm_pass1] -   parse   10x(23% 1m15s block:26s wait:2m48s)
0:07:30 INF [osm_pass1] -   process  1x(48% 2m36s wait:2m39s)
0:07:30 INF [osm_pass2] -
0:07:30 INF [osm_pass2] - Starting...
0:07:40 INF [osm_pass2] -  nodes: [  30M   2%   3M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  46M 8.7k/s ] 4.4G  blocks: [ 3.8k   2%  379/s ]
    cpus: 4.9 gc:  2% heap: 2.3G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read( 4%) ->    (9/22) -> process(43% 40% 38% 40% 40% 42% 42% 40% 41% 40% 41%) -> (5.5k/62k) -> write( 0%)
0:07:50 INF [osm_pass2] -  nodes: [  67M   5% 3.7M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  46M 7.3k/s ] 4.4G  blocks: [ 8.4k   5%  467/s ]
    cpus: 5.5 gc:  0% heap: 3G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read( 7%) ->   (11/22) -> process(49% 49% 46% 49% 50% 48% 49% 50% 47% 49% 51%) -> (5.5k/62k) -> write( 0%)
0:08:00 INF [osm_pass2] -  nodes: [ 102M   7% 3.4M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  52M 569k/s ] 4.4G  blocks: [  12k   7%  425/s ]
    cpus: 6.5 gc:  1% heap: 4.6G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read( 6%) ->    (4/22) -> process(53% 51% 55% 55% 53% 53% 54% 51% 53% 57% 54%) -> (6.2k/62k) -> write(16%)
0:08:10 INF [osm_pass2] -  nodes: [ 140M  10% 3.8M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  52M 7.9k/s ] 4.4G  blocks: [  17k   9%  478/s ]
    cpus: 5.6 gc:  0% heap: 3.6G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read( 7%) ->    (9/22) -> process(52% 50% 49% 53% 50% 50% 50% 50% 48% 51% 51%) -> (5.6k/62k) -> write( 0%)
0:08:20 INF [osm_pass2] -  nodes: [ 180M  13% 3.9M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  53M 112k/s ] 4.4G  blocks: [  22k  12%  496/s ]
    cpus: 6.6 gc:  1% heap: 4.2G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read( 8%) ->    (3/22) -> process(56% 57% 55% 58% 55% 55% 55% 57% 55% 56% 55%) -> (5.2k/62k) -> write( 4%)
0:08:30 INF [osm_pass2] -  nodes: [ 225M  16% 4.4M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  54M  79k/s ] 4.4G  blocks: [  28k  15%  554/s ]
    cpus: 6.5 gc:  1% heap: 2.9G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read( 9%) ->    (3/22) -> process(59% 58% 56% 57% 56% 57% 59% 59% 55% 60% 58%) -> (4.7k/62k) -> write( 2%)
0:08:40 INF [osm_pass2] -  nodes: [ 266M  19% 4.1M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  54M  16k/s ] 4.4G  blocks: [  33k  18%  517/s ]
    cpus: 6 gc:  0% heap: 2.3G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read( 8%) ->    (4/22) -> process(55% 56% 52% 54% 52% 51% 54% 54% 53% 55% 54%) -> (6.2k/62k) -> write( 1%)
0:08:50 INF [osm_pass2] -  nodes: [ 325M  24% 5.8M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  54M  52k/s ] 4.4G  blocks: [  40k  22%  729/s ]
    cpus: 9.1 gc:  1% heap: 3G/6.4G direct: 54M postGC: 1.9G relInfo:  77M mpGeoms: 277
    read(11%) ->   (11/22) -> process(77% 76% 77% 80% 77% 78% 78% 77% 78% 79% 77%) -> (5.4k/62k) -> write( 2%)
0:09:00 INF [osm_pass2] -  nodes: [ 398M  29% 7.2M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  55M  72k/s ] 4.4G  blocks: [  49k  26%  903/s ]
    cpus: 11.2 gc:  1% heap: 2G/6.4G direct: 54M postGC: 873M relInfo:  77M mpGeoms: 277
    read(20%) ->    (9/22) -> process(95% 95% 97% 97% 96% 96% 96% 96% 95% 96% 97%) -> (6.7k/62k) -> write( 2%)
0:09:10 INF [osm_pass2] -  nodes: [ 468M  34%   7M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  56M 114k/s ] 4.4G  blocks: [  58k  31%  883/s ]
    cpus: 11.3 gc:  1% heap: 3G/6.4G direct: 54M postGC: 890M relInfo:  77M mpGeoms: 277
    read(16%) ->   (11/22) -> process(96% 95% 95% 96% 96% 95% 97% 95% 96% 96% 96%) -> (3.8k/62k) -> write( 4%)
0:09:20 INF [osm_pass2] -  nodes: [ 543M  40% 7.4M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  57M  62k/s ] 4.4G  blocks: [  67k  36%  935/s ]
    cpus: 11.1 gc:  1% heap: 2.1G/6.4G direct: 54M postGC: 890M relInfo:  77M mpGeoms: 277
    read(15%) ->   (11/22) -> process(98% 100% 99% 99% 99% 98% 99% 99% 99% 99% 99%) -> (6.6k/62k) -> write( 2%)
0:09:30 INF [osm_pass2] -  nodes: [ 618M  45% 7.4M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  57M  56k/s ] 4.4G  blocks: [  77k  41%  931/s ]
    cpus: 11.1 gc:  1% heap: 877M/6.4G direct: 54M postGC: 891M relInfo:  77M mpGeoms: 277
    read(20%) ->    (9/22) -> process(98% 98% 99% 99% 98% 99% 98% 99% 98% 99% 99%) -> (4.9k/62k) -> write( 2%)
0:09:40 INF [osm_pass2] -  nodes: [ 690M  51% 7.1M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  59M 145k/s ] 4.4G  blocks: [  86k  46%  899/s ]
    cpus: 11.2 gc:  1% heap: 1.6G/6.4G direct: 54M postGC: 894M relInfo:  77M mpGeoms: 277
    read(19%) ->   (11/22) -> process(97% 96% 98% 98% 97% 98% 97% 98% 97% 98% 98%) ->  (7k/62k) -> write( 4%)
0:09:50 INF [osm_pass2] -  nodes: [ 761M  56% 7.1M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  60M 131k/s ] 5.7G  blocks: [  95k  51%  892/s ]
    cpus: 11 gc:  1% heap: 1.3G/6.4G direct: 54M postGC: 893M relInfo:  77M mpGeoms: 277
    read(16%) ->   (10/22) -> process(98% 97% 97% 98% 98% 97% 97% 97% 96% 97% 97%) -> (5.4k/62k) -> write( 5%)
0:10:00 INF [osm_pass2] -  nodes: [ 833M  61% 7.1M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  62M 159k/s ] 5.7G  blocks: [ 104k  55%  889/s ]
    cpus: 10.9 gc:  1% heap: 1.2G/6.4G direct: 54M postGC: 894M relInfo:  77M mpGeoms: 277
    read(18%) ->   (11/22) -> process(96% 96% 96% 96% 97% 96% 96% 95% 97% 97% 97%) -> (5.6k/62k) -> write( 4%)
0:10:10 INF [osm_pass2] -  nodes: [ 905M  66% 7.2M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  63M  91k/s ] 5.7G  blocks: [ 113k  60%  900/s ]
    cpus: 10.9 gc:  1% heap: 3.3G/6.4G direct: 54M postGC: 896M relInfo:  77M mpGeoms: 277
    read(16%) ->   (10/22) -> process(96% 96% 95% 96% 97% 97% 96% 94% 97% 96% 96%) -> (5.9k/62k) -> write( 3%)
0:10:20 INF [osm_pass2] -  nodes: [ 970M  71% 6.4M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  68M 479k/s ] 5.7G  blocks: [ 121k  65%  804/s ]
    cpus: 11.2 gc:  1% heap: 2.3G/6.4G direct: 54M postGC: 899M relInfo:  77M mpGeoms: 277
    read(15%) ->   (11/22) -> process(97% 98% 99% 99% 99% 98% 98% 98% 98% 99% 99%) -> (6.2k/62k) -> write(13%)
0:10:30 INF [osm_pass2] -  nodes: [   1B  76% 7.3M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  69M  96k/s ] 5.7G  blocks: [ 130k  69%  918/s ]
    cpus: 11.1 gc:  1% heap: 871M/6.4G direct: 54M postGC: 899M relInfo:  77M mpGeoms: 277
    read(18%) ->   (11/22) -> process(98% 98% 99% 98% 98% 98% 97% 98% 98% 98% 99%) -> (6.5k/62k) -> write( 3%)
0:10:40 INF [osm_pass2] -  nodes: [ 1.1B  82% 7.3M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  69M  92k/s ] 5.7G  blocks: [ 139k  74%  917/s ]
    cpus: 11.1 gc:  1% heap: 2.3G/6.4G direct: 54M postGC: 899M relInfo:  77M mpGeoms: 277
    read(18%) ->   (11/22) -> process(98% 98% 98% 98% 99% 98% 98% 97% 99% 98% 98%) -> (4.4k/62k) -> write( 2%)
0:10:50 INF [osm_pass2] -  nodes: [ 1.1B  87% 7.4M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  70M  66k/s ] 5.7G  blocks: [ 149k  79%  932/s ]
    cpus: 11 gc:  1% heap: 1.2G/6.4G direct: 54M postGC: 898M relInfo:  77M mpGeoms: 277
    read(20%) ->   (11/22) -> process(98% 97% 97% 98% 98% 98% 97% 97% 98% 98% 98%) -> (5.7k/62k) -> write( 3%)
0:11:00 INF [osm_pass2] -  nodes: [ 1.2B  93% 7.3M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  71M 104k/s ] 5.7G  blocks: [ 158k  84%  916/s ]
    cpus: 11.1 gc:  1% heap: 2.6G/6.4G direct: 54M postGC: 900M relInfo:  77M mpGeoms: 277
    read(17%) ->   (11/22) -> process(98% 98% 97% 99% 98% 98% 97% 98% 99% 98% 98%) -> (4.9k/62k) -> write( 3%)
0:11:10 INF [osm_pass2] -  nodes: [ 1.3B  98% 7.3M/s ] 16G   ways: [    0   0%    0/s ] rels: [    0   0%    0/s ] features: [  72M  89k/s ] 5.7G  blocks: [ 167k  89%  919/s ]
    cpus: 11.2 gc:  1% heap: 1.1G/6.4G direct: 54M postGC: 900M relInfo:  77M mpGeoms: 277
    read(19%) ->   (11/22) -> process(97% 97% 98% 99% 98% 98% 99% 99% 98% 98% 98%) ->  (7k/62k) -> write( 3%)
0:11:14 DEB [osm_pass2:process] - Sorting long long multimap...
0:11:14 INF [osm_pass2:process] - Finished nodes: 1,367,130,624 (6M/s) in 3m45s cpu:35m13s gc:2s avg:9.4
0:11:15 DEB [osm_pass2:process] - Sorted long long multimap 0.7s cpu:4s avg:6
0:11:15 INF [osm_pass2:process] - madvise not available on this system - node location lookup may be slower when less free RAM is available outside the JVM
0:11:21 INF [osm_pass2] -  nodes: [ 1.3B 100% 2.7M/s ] 16G   ways: [ 9.9k   0%  957/s ] rels: [    0   0%    0/s ] features: [  73M  59k/s ] 5.7G  blocks: [ 170k  91%  334/s ]
    cpus: 5 gc:  0% heap: 3.5G/6.4G direct: 54M postGC: 900M relInfo:  69M mpGeoms: 156k
    read( 8%) ->   (11/22) -> process(36% 38% 36% 36% 38% 37% 42% 36% 36% 36% 36%) -> (4.9k/62k) -> write( 2%)
0:11:32 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [  42k   0%   3k/s ] rels: [    0   0%    0/s ] features: [  73M  16k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.4 gc:  1% heap: 2G/6.4G direct: 54M postGC: 964M relInfo:  69M mpGeoms: 412k
    read( 0%) ->   (11/22) -> process( 1%  4%  3%  4%  4%  3%  1%  5%  0%  1%  2%) -> (5.1k/62k) -> write( 1%)
0:11:42 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [  73k   0% 2.9k/s ] rels: [    0   0%    0/s ] features: [  73M  15k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.3 gc:  0% heap: 3.2G/6.4G direct: 54M postGC: 964M relInfo:  69M mpGeoms: 674k
    read( 0%) ->   (11/22) -> process( 2%  1%  5%  6%  3%  2%  0%  1%  2%  2%  3%) ->  (6k/62k) -> write( 1%)
0:11:53 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 100k   0% 2.6k/s ] rels: [    0   0%    0/s ] features: [  73M  14k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.3 gc:  0% heap: 1.5G/6.4G direct: 54M postGC: 963M relInfo:  69M mpGeoms: 724k
    read( 0%) ->   (11/22) -> process( 0%  2%  5%  5%  2%  2%  1%  1%  3%  1%  2%) -> (4.9k/62k) -> write( 0%)
0:12:04 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 124k   0% 2.2k/s ] rels: [    0   0%    0/s ] features: [  73M  13k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.2 gc:  0% heap: 2.4G/6.4G direct: 54M postGC: 963M relInfo:  69M mpGeoms: 724k
    read( 0%) ->   (11/22) -> process( 1%  2%  3%  1%  2%  4%  1%  2%  2%  0%  3%) -> (3.9k/62k) -> write( 1%)
0:12:14 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 145k   0% 1.9k/s ] rels: [    0   0%    0/s ] features: [  73M  12k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.3 gc:  0% heap: 3.3G/6.4G direct: 54M postGC: 963M relInfo:  69M mpGeoms: 774k
    read( 0%) ->   (11/22) -> process( 1%  1%  2%  1%  4%  4%  1%  4%  2%  1%  3%) -> (5.1k/62k) -> write( 1%)
0:12:25 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 170k   0% 2.3k/s ] rels: [    0   0%    0/s ] features: [  74M  14k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.3 gc:  1% heap: 1.7G/6.4G direct: 54M postGC: 966M relInfo:  69M mpGeoms: 774k
    read( 0%) ->   (11/22) -> process( 1%  2%  2%  1%  3%  5%  4%  3%  3%  1%  2%) -> (6.5k/62k) -> write( 0%)
0:12:35 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 198k   0% 2.6k/s ] rels: [    0   0%    0/s ] features: [  74M  15k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.3 gc:  0% heap: 2.8G/6.4G direct: 54M postGC: 966M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 0%  4%  2%  2%  3%  2%  2%  3%  4%  2%  1%) -> (3.9k/62k) -> write( 1%)
0:12:46 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 219k   0%   2k/s ] rels: [    0   0%    0/s ] features: [  74M  13k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.2 gc:  0% heap: 1G/6.4G direct: 54M postGC: 960M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 0%  2%  2%  3%  1%  3%  1%  1%  4%  1%  2%) -> (4.5k/62k) -> write( 1%)
0:12:56 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 235k   0% 1.4k/s ] rels: [    0   0%    0/s ] features: [  74M 9.1k/s ] 5.7G  blocks: [ 170k  91%    0/s ]
    cpus: 0.2 gc:  0% heap: 1.6G/6.4G direct: 54M postGC: 960M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 0%  1%  2%  0%  4%  2%  1%  1%  1%  2%  2%) -> (5.8k/62k) -> write( 1%)
0:13:07 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 251k   0% 1.5k/s ] rels: [    0   0%    0/s ] features: [  74M 9.4k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.2 gc:  0% heap: 2.2G/6.4G direct: 54M postGC: 960M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 0%  1%  2%  1%  3%  2%  3%  3%  1%  1%  1%) -> (4.9k/62k) -> write( 0%)
0:13:18 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 264k   0% 1.2k/s ] rels: [    0   0%    0/s ] features: [  74M 8.6k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.2 gc:  0% heap: 2.7G/6.4G direct: 54M postGC: 960M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 1%  1%  1%  1%  1%  1%  2%  2%  1%  0%  3%) -> (4.8k/62k) -> write( 0%)
0:13:28 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 273k   0%  813/s ] rels: [    0   0%    0/s ] features: [  74M 5.3k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.1 gc:  0% heap: 3G/6.4G direct: 54M postGC: 960M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 0%  2%  1%  1%  2%  1%  1%  1%  1%  0%  1%) -> (5.5k/62k) -> write( 0%)
0:13:39 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 283k   0%  989/s ] rels: [    0   0%    0/s ] features: [  74M 6.8k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.2 gc:  0% heap: 3.4G/6.4G direct: 54M postGC: 960M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 0%  3%  1%  2%  1%  2%  1%  2%  2%  0%  1%) -> (4.8k/62k) -> write( 0%)
0:13:49 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 293k   0%  976/s ] rels: [    0   0%    0/s ] features: [  74M 6.6k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.2 gc:  0% heap: 1.2G/6.4G direct: 54M postGC: 958M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 1%  1%  1%  1%  2%  2%  2%  2%  1%  1%  1%) -> (6.3k/62k) -> write( 0%)
0:14:00 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 307k   0% 1.3k/s ] rels: [    0   0%    0/s ] features: [  75M 9.3k/s ] 5.7G  blocks: [ 170k  91%    0/s ]
    cpus: 0.1 gc:  0% heap: 1.7G/6.4G direct: 54M postGC: 958M relInfo:  69M mpGeoms: 824k
    read( 0%) ->   (11/22) -> process( 0%  1%  1%  1%  2%  1%  1%  1%  1%  1%  1%) -> (4.3k/62k) -> write( 0%)
0:14:11 INF [osm_pass2] -  nodes: [ 1.3B 100%    0/s ] 16G   ways: [ 326k   0% 1.7k/s ] rels: [    0   0%    0/s ] features: [  75M  12k/s ] 5.7G  blocks: [ 170k  91%   <1/s ]
    cpus: 0.2 gc:  0% heap: 2.4G/6.4G direct: 54M postGC: 958M relInfo:  69M mpGeoms: 874k
    read( 0%) ->   (11/22) -> process( 1%  1%  1%  2%  2%  2%  2%  1%  1%  2%  1%) -> (4.9k/62k) -> write( 1%)
^C

@msbarry
Contributor

msbarry commented Aug 18, 2024

OK, got it, thanks for the info. The HDD is probably the issue here. One workaround would be to use --storage=ram with -Xmx30g to force in-memory storage, but that will only let you go so far. Planetiler also speeds up random-access memory-mapped file reads using madvise on Linux, but that doesn't work on Windows; I wonder if there's an equivalent that might help?

Something like this might work:

  1. pass1 nodes: write out (id, location) pairs to a file in order like it already does
  2. pass1 ways: write out (way ID, node ID) pairs in order to another file ordered by way ID
  3. external merge sort (way, node) pairs from (2) by node ID
  4. iterate through the 2 files from (1) and (3) together and write out (way ID, node ID, location)
  5. external merge sort output of (4) by way ID
  6. then on pass 2 ways, just have each reader scan through that sorted list from (5) to get the node locations for the next way, or collapse that down to a single thread that passes off full way geometries to the workers

The two merge sorts and sequential scans should be more HDD-friendly, but that would be a good amount of work and is pretty low priority since most people use an SSD. For now I just updated the README to recommend using an SSD, but feel free to open a PR or report back if you have the bandwidth to prototype it and see if it helps on your setup.
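A minimal sketch of the join in steps 3 and 4 above, not Planetiler code: records are modeled as in-memory long[] rows and all names are illustrative. On disk these would be fixed-width binary rows read with buffered sequential I/O, which is what makes the approach HDD-friendly.

```java
import java.util.ArrayList;
import java.util.List;

public class NodeJoin {

  // nodes:   [nodeId, location] rows, sorted by nodeId (output of step 1)
  // wayRefs: [wayId, nodeId] rows, sorted by nodeId (output of the merge sort in step 3)
  // returns: [wayId, nodeId, location] rows, ready to be re-sorted by wayId (step 5)
  static List<long[]> mergeJoin(long[][] nodes, long[][] wayRefs) {
    List<long[]> out = new ArrayList<>();
    int n = 0;
    for (long[] ref : wayRefs) {
      // advance the node cursor; it never moves backwards, so both
      // inputs are consumed in a single sequential pass
      while (n < nodes.length && nodes[n][0] < ref[1]) {
        n++;
      }
      if (n < nodes.length && nodes[n][0] == ref[1]) {
        out.add(new long[] {ref[0], ref[1], nodes[n][1]});
      }
    }
    return out;
  }
}
```

Because both inputs share the same sort order, repeated node IDs (a node referenced by several ways) are handled without backtracking, and the only random-access-like work left is the external sorts themselves.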

@forgotPassword
Author

Running with -Xmx21g --nodemap_storage=ram I can now process the US, thanks!
