Breaking change in .NET 9.0 in fp-to-int conversion causes unit tests to fail #228
It's worth noting that this isn't strictly true. It would not produce this result on Arm64 or WASM machines, and it similarly would not produce this result on an x64 machine with AVX512 support. Historically, the behavior of floating-point-to-integer conversions has been "undefined" when the source value cannot be represented in the destination after truncation. This is explicitly called out in the C# language spec, the runtime spec (ECMA-335), and the IEEE 754 floating-point specification, and is typical in other languages as well.

Newer languages and runtimes have recognized this issue and started normalizing the behavior towards saturation instead. This best fits the more general IEEE 754 behavior that exists for almost all other operations, where "the result is computed as if to infinite precision and unbounded range, prior to rounding to the nearest representable result". Saturation is correspondingly required by languages like Rust and platforms like WASM, and it is the behavior that platforms like Arm64 have opted to implement by default. .NET made a push towards determinism and opted to mirror the behavior the industry is standardizing on. We did expose some new APIs (e.g. `double.ConvertToIntegerNative<TInteger>`) for code that explicitly wants the old, platform-specific behavior.
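As a brief illustration of the difference (a sketch based on the .NET 9 behavior described above; the result of the native conversion is platform-dependent, and the value shown is only what typical pre-AVX512 x64 hardware produces):

```csharp
using System;

double tooBig = 1e10; // not representable as an int after truncation

// .NET 9: plain casts now saturate deterministically on every platform.
int saturated = unchecked((int)tooBig); // 2147483647 (int.MaxValue)

// New in .NET 9: opt back into the platform-specific ("native") conversion
// where raw conversion speed matters more than cross-platform determinism.
int native = double.ConvertToIntegerNative<int>(tooBig); // e.g. -2147483648 on pre-AVX512 x64

Console.WriteLine($"{saturated} vs {native}");
```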
More details about this break can be seen here: https://learn.microsoft.com/en-us/dotnet/core/compatibility/jit/9.0/fp-to-integer

I'm also happy to answer any additional questions on the topic if they exist.
Thanks for reporting this @kpreisser, I'll take a look.
@tannergooding Is it expected that I get different results in .NET 9 depending on whether the cast is a compile-time constant or not? i.e.

```csharp
[MethodImpl(MethodImplOptions.NoInlining)]
double GetNegativeInfinity() => double.NegativeInfinity;

[MethodImpl(MethodImplOptions.NoInlining)]
double GetPositiveInfinity() => double.PositiveInfinity;

// .NET 9 (expected saturating behavior)
Assert.AreEqual(2147483647, unchecked((int)GetPositiveInfinity()));
Assert.AreEqual(-2147483648, unchecked((int)GetNegativeInfinity()));
Assert.AreEqual(4294967295u, unchecked((uint)GetPositiveInfinity()));
Assert.AreEqual(0u, unchecked((uint)GetNegativeInfinity()));

// .NET 9 (compile-time constants)
Assert.AreEqual(0, unchecked((int)double.PositiveInfinity));
Assert.AreEqual(0, unchecked((int)double.NegativeInfinity));
Assert.AreEqual(0u, unchecked((uint)double.PositiveInfinity));
Assert.AreEqual(0u, unchecked((uint)double.NegativeInfinity));
```

Here's the behavior when running on .NET 6:

```csharp
// .NET 6 (old behavior)
Assert.AreEqual(-2147483648, unchecked((int)GetPositiveInfinity()));
Assert.AreEqual(-2147483648, unchecked((int)GetNegativeInfinity()));
Assert.AreEqual(0u, unchecked((uint)GetPositiveInfinity()));
Assert.AreEqual(0u, unchecked((uint)GetNegativeInfinity()));

// .NET 6 (compile-time constants)
Assert.AreEqual(0, unchecked((int)double.PositiveInfinity));
Assert.AreEqual(0, unchecked((int)double.NegativeInfinity));
Assert.AreEqual(0u, unchecked((uint)double.PositiveInfinity));
Assert.AreEqual(0u, unchecked((uint)double.NegativeInfinity));
```

It's surprising to me that these should be different, but I guess at least it's not a regression... :-)
Yes, at least for the time being. Roslyn needed to ensure cross-machine determinism before the runtime made a decision, and opted to just make all undefined results produce `0`.
Seems a shame you (edit: or rather, the .NET team) couldn't get both breaking changes out of the way in one release. Ah well. Thanks for taking the time to reply.
@kpreisser I've published a fix to NuGet under version 3.2.8. Let me know if you have any issues.
.NET 9.0 introduced breaking changes in JIT behavior for floating-point-to-integer conversions, which cause a number of unit tests to fail, e.g. `BitwiseAnd`. This may also be the reason why the TypeScript Compiler (when running with Jurassic) produces incorrect errors about interfaces being extended incorrectly; see dotnet/runtime#110004.

For example, the `BitwiseAnd` test emits IL that loads `-7` (as a `double`) and converts it to `uint`, i.e. the equivalent of a C# `(uint)` cast.
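The original code blocks from this report were lost in extraction; the following is a minimal reconstruction, under the assumption that the emitted conversion boils down to `ldc.r8 -7` followed by `conv.u4`:

```csharp
// Hedged reconstruction of the conversion the test exercises: dynamically
// emit "load -7 as a double, then conv.u4" and invoke it.
using System;
using System.Reflection.Emit;

var method = new DynamicMethod("ConvertToUint", typeof(uint), Type.EmptyTypes);
var il = method.GetILGenerator();
il.Emit(OpCodes.Ldc_R8, -7.0); // load -7 as a double
il.Emit(OpCodes.Conv_U4);      // convert to uint (undefined for negative values before .NET 9)
il.Emit(OpCodes.Ret);

var convert = (Func<uint>)method.CreateDelegate(typeof(Func<uint>));
Console.WriteLine($"0x{convert():X8}"); // 0xFFFFFFF9 up to .NET 8 on x64; 0x00000000 on .NET 9
```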
Previously (up to .NET 8.0), this produced `0xFFFFFFF9`, but in .NET 9.0, it results in `0`. It looks like the conversion from `double` to `int`/`uint` and other integer types might need to be adjusted, for example in `ReflectionEmitILGenerator.ConvertToUnsignedInteger()` and in the `TypeConverter.To[U]Int...()` methods.
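For reference, one way to make such a conversion deterministic regardless of runtime version is to perform the ECMAScript ToUint32 modular reduction explicitly, so the final cast is always in range. This is only a sketch of the general technique, not Jurassic's actual fix (the `ToUint32` helper name is hypothetical):

```csharp
// Hedged sketch: compute ECMA-262 ToUint32 with explicit modular arithmetic,
// producing the wrapping result whether the runtime's double->uint cast
// truncates (pre-.NET 9 x64) or saturates (.NET 9+).
static uint ToUint32(double value)
{
    if (double.IsNaN(value) || double.IsInfinity(value))
        return 0;

    // Take the truncated value modulo 2^32, as ECMA-262 requires.
    double modded = Math.Truncate(value) % 4294967296.0; // 2^32
    if (modded < 0)
        modded += 4294967296.0;

    return (uint)modded; // now in [0, 2^32), so the cast is well-defined everywhere
}
```

With this helper, `ToUint32(-7)` yields `4294967289` (`0xFFFFFFF9`), matching the pre-.NET 9 result of the emitted `conv.u4` on all runtimes and platforms.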