When I evaluate the sample, I try to pass the argument
--batch_size=N
for a range of N, but I always receive this error:
2018-05-01 22:46:06.579010: W tensorflow/core/framework/op_kernel.cc:1158] Resource exhausted: OOM when allocating tensor with shape[3,3,128,256]
The code does not appear to respond to the argument at all. What is the correct way to change the batch size for BNN_cifar10?
In the code:
def train(model, data, batch_size=128, learning_rate=FLAGS.learning_rate, log_dir='./log', checkpoint_dir='./checkpoint', num_epochs=-1):
it seems that batch_size is hard-coded to a default of 128 and is never read from the command-line flags, unlike learning_rate, which is taken from FLAGS.
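If that diagnosis is right, the fix is to thread the parsed flag through to train() instead of relying on the hard-coded default. A minimal sketch of the pattern, using argparse in place of the project's actual FLAGS object (the flag names and the stub train() body here are illustrative assumptions, not the repository's real code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', type=int, default=128,
                    help='number of examples per training batch')
parser.add_argument('--learning_rate', type=float, default=0.001)

def train(model, data, batch_size=128, learning_rate=0.001):
    # Stub body: just report the values the function actually received,
    # so we can verify the flag made it through.
    return {'batch_size': batch_size, 'learning_rate': learning_rate}

# Simulate running with --batch_size=64 on the command line.
FLAGS, _ = parser.parse_known_args(['--batch_size', '64'])

# Pass the parsed flag explicitly; leaving batch_size to its default
# silently ignores whatever was passed on the command line.
result = train(model=None, data=None,
               batch_size=FLAGS.batch_size,
               learning_rate=FLAGS.learning_rate)
print(result['batch_size'])  # → 64
```

The same idea applies with tf.app.flags or absl.flags: defining the flag is not enough; the call site must forward FLAGS.batch_size into train(), otherwise the keyword default of 128 always wins.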