@MarvinTeichmann
I've tried to train multinet2.json on a single GPU (1080Ti), and the training seems to collapse. Did you use multiple GPUs?
The output is as follows:
2018-12-13 17:56:15.831145: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.64GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-12-13 17:56:15.869555: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.73GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-12-13 17:56:16,014 INFO Segmentation Loss was used.
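In case it's relevant, here is a minimal sketch of what I would try to reduce memory pressure on a single 11 GB card. This uses the standard TensorFlow 1.x GPUOptions API; where exactly to plug it into the MultiNet/TensorVision training code (and whether a smaller batch size in multinet2.json is also needed) is an assumption on my part:

```python
import tensorflow as tf  # TF 1.x, as used by MultiNet

# Hypothetical workaround sketch: let TensorFlow allocate GPU memory on demand
# instead of grabbing it all up front, which can reduce BFC allocator pressure.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    # ... build and run the MultiNet training graph here ...
    pass
```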