Hi! I'm coming from an issue in the tensorflow repo. The issue was closed, so rather than reopen it I'm answering your question here:
> Is it a common scenario to calculate the power of negative numbers?
Yes. If I raise any float32 tensor to a power, every negative entry turns to NaN, no matter what the power is (equal to 2, greater than 2, or odd).
But if I initialize the tensor as float64, everything is fine.
> Because the special function unit is a limited resource on the GPU, you should use multiplication if your power is a small integer.
OK, I get it: I can multiply a tensor by itself, but that is really awkward and it breaks my pipeline. And in someone else's code, written for a normal TF build, I will get NaN anyway, and not only when computing powers: optimizers and other things under the hood may also stop working properly.
So what should I do?
I don't want to downgrade my CUDA and cuDNN; I just want to use the latest versions with the latest TF, without compiling it myself.
And by the way, thank you for doing this! I'm sure many people appreciate it.
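Rather than scattering `x * x` through the pipeline, the multiply workaround can be wrapped once. A hypothetical helper (plain Python, exponentiation by squaring), just to illustrate; since it only uses `*`, the same function works elementwise on NumPy arrays or TF tensors:

```python
def int_pow(x, n):
    """Raise x to a non-negative integer power n using only multiplications
    (exponentiation by squaring), avoiding the exp/log path entirely."""
    if n < 0:
        raise ValueError("only non-negative integer exponents supported")
    result = 1
    base = x
    while n:
        if n & 1:           # include this bit's factor in the result
            result = result * base
        base = base * base  # square the base for the next bit
        n >>= 1
    return result

print(int_pow(-3.0, 3))  # -27.0
```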
After the 1.7.0 release, I will use SSE2 and AVX2 to build the wheels. The SSE2 build will not use `-ffast-math`, so it has the same behavior as the official pip version; the AVX2 build uses `-ffast-math` to speed things up.
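For reference, that distinction presumably maps onto bazel copt flags roughly like this (a sketch; the exact target and flags may differ for your TF version):

```shell
# SSE2 wheel: matches official pip behavior (no fast-math).
bazel build -c opt --copt=-msse2 \
    //tensorflow/tools/pip_package:build_pip_package

# AVX2 wheel: faster, but -ffast-math makes pow() with a negative
# float32 base return NaN (it lowers pow to exp(y*log(x))).
bazel build -c opt --copt=-mavx2 --copt=-ffast-math \
    //tensorflow/tools/pip_package:build_pip_package
```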