Hello Daniel,
thanks for your nice toolbox. It seems that with model=998 it also works for a Humminbird Helix 5 SI.
However, I ran into some problems when trying to process the data.
Trying to process it as a whole, I get this error:
philipp@azurit:~/Humminbird-Sonar/Aufnahmen/Heidkate/Hk001$ python Heidkate01.py
Input file is R00011.DAT
Son files are in .
cs2cs arguments are epsg:3857
Draft: 0.3
Celerity of sound: 1495.0 m/s
Transducer length is 0.108 m
Bed picking is auto
Only 1 chunk will be produced
Data is from the 998 series
Bearing will be calculated from coordinates
Bearing will be filtered
Checking the epsg code you have chosen for compatibility with Basemap ...
... epsg code compatible
WARNING: Because files have to be read in byte by byte,
this could take a very long time ...
low-frq. downward scan not available
port and starboard scans are different sizes ... rectifying
Traceback (most recent call last):
  File "Heidkate01.py", line 140, in <module>
    PyHum.read(humfile, sonpath, cs2cs_args, c, draft, doplot, t, bedpick, flip_lr, model, calc_bearing, filt_bearing, chunk) #cog
  File "/usr/local/lib/python2.7/dist-packages/PyHum/_pyhum_read.py", line 503, in read
    if np.shape(port_fp[0])[1] > np.shape(star_fp[0])[1]:
IndexError: tuple index out of range
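If I read the traceback right, np.shape(port_fp[0]) returns a one-element tuple there, i.e. the port scan came back as a 1-D array, so indexing [1] fails. A minimal sketch of that failure mode (the shapes are invented, just for illustration):

import numpy as np

# If a scan is read back as a 1-D array, its shape tuple has only one
# element, so asking for element [1] raises the IndexError shown above.
port_scan = np.zeros(512)          # 1-D, shape (512,)
star_scan = np.zeros((512, 1024))  # 2-D, shape (512, 1024)
print(np.shape(star_scan)[1])      # 1024 -- works
print(np.shape(port_scan)[1])      # IndexError: tuple index out of range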
When I use the chunk method, chunk=d100, PyHum.read and PyHum.correct work properly; however, PyHum.map isn't able to process all chunks. After about 10 to 12 chunks, memory runs out and I get the following message:
getting point cloud ...
error on chunk 11
When I change the corresponding for-loop in _pyhum_map.py on line 295 to:
for p in range(10, len(star_fp)):
I can process the next ~10 chunks until memory is full again, so it's not a data issue.
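Something along these lines might automate that manual edit; this is only a sketch with placeholders (process_chunk and star_fp stand in for the real per-chunk code and memory-mapped list in _pyhum_map.py):

import gc

# Hypothetical batching sketch, not the actual _pyhum_map.py code:
# process the chunks in batches and collect garbage between batches,
# so intermediates from one batch are released before the next starts.
def map_chunks_in_batches(star_fp, process_chunk, batch=10):
    for start in range(0, len(star_fp), batch):
        for p in range(start, min(start + batch, len(star_fp))):
            process_chunk(star_fp[p])
        gc.collect()  # release per-batch intermediates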
Thanks for your help,
Philipp
The issue with the read module might be related to slight differences in how the HELIX unit records the data. For example, I notice that your port and starboard scans were different sizes, which is unusual and might be breaking another part of the code. If you provide an example set of files, I will investigate. I need to spend some time ensuring full compatibility with the newer ONIX, HELIX and MEGA series. Having data examples will always help. It will also help me figure out why the map module is choking on memory. Thanks
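For context, "rectifying" here only means bringing the two scans to a common shape before they are compared downstream; an illustrative sketch of that idea (not the actual PyHum code):

import numpy as np

# Illustrative only, not PyHum's implementation: trim the port and
# starboard records to their common extent so downstream code sees
# matching shapes.
def rectify(port, star):
    rows = min(port.shape[0], star.shape[0])
    cols = min(port.shape[1], star.shape[1])
    return port[:rows, :cols], star[:rows, :cols]

port, star = rectify(np.zeros((500, 1024)), np.zeros((512, 1000)))
print(port.shape, star.shape)  # both (500, 1000)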
The new version has support for HELIX; however, the files you provided appear to be corrupted, or are otherwise different from other HELIX model data I have tried. Do you have another example?