I am trying to run the uncertainty_estimation.py file with a custom dataset, but I always get the following error:
using 3306 images for training, 364 images for validation.
Traceback (most recent call last):
File "uncertainty_estimation.py", line 184, in
run(args.net_type, args.weights_path, args.notmnist_dir)
File "uncertainty_estimation.py", line 139, in run
sample_mnist, truth_mnist = get_sample(mnist_set)
File "uncertainty_estimation.py", line 115, in get_sample
sample = transform(sample)
File "/home/sharif/PyTorch-BayesianCNN-master/venv/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 60, in call
img = t(img)
File "/home/sharif/PyTorch-BayesianCNN-master/venv/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 179, in call
return F.to_pil_image(pic, self.mode)
File "/home/sharif/PyTorch-BayesianCNN-master/venv/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 219, in to_pil_image
raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndimension()))
ValueError: pic should be 2/3 dimensional. Got 4 dimensions.
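The error says that `to_pil_image` received a 4-dimensional tensor, i.e. one that still has a batch dimension (N, C, H, W), while it only accepts a single image of shape (H, W) or (C, H, W). Below is a minimal sketch reproducing and working around that situation; the transform pipeline shown here is an assumption for illustration, not the actual one inside uncertainty_estimation.py, and the suggested fix is to drop the leading batch dimension on the sample before it is handed to the transform:

```python
import torch
from torchvision import transforms

# ToPILImage accepts only a single image tensor: (H, W) or (C, H, W).
# A 4-D tensor (N, C, H, W) -- e.g. a sample taken from a DataLoader batch --
# raises exactly the ValueError from the traceback above.
transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(32),
    transforms.ToTensor(),
])

batched = torch.rand(1, 3, 32, 32)   # 4-D tensor: would trigger the error
single = batched.squeeze(0)          # 3-D tensor (C, H, W): accepted

# transform(batched)                 # ValueError: pic should be 2/3 dimensional. Got 4 dimensions.
out = transform(single)              # works
print(out.shape)                     # torch.Size([3, 32, 32])
```

So if the custom dataset (or the code that draws a sample from it) returns a batched tensor, squeezing or indexing out the batch dimension before calling the transform should resolve the error.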