
fix: GPU complex reducer prod for empty lists #3235

Merged: 2 commits into main, Sep 12, 2024

Conversation

@ianna (Collaborator) commented Sep 12, 2024

fixes issue #3214

The test failure is due to a mismatch between the expected and actual results in the cpt.assert_allclose assertion. The issue arises when comparing the results of the ak.prod operation on arrays converted to the CUDA backend (cuda_depth1) with the original CPU backend (depth1). Specifically, the third element of the compared arrays (which comes from an EmptyArray) differs: 0j versus (1.4641-0j).

The product of no elements is defined as 1, the multiplicative identity.
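
For concreteness, here is a minimal sketch of the behavior in question (illustrative values, not the actual test from #3214), using the public ak.Array / ak.prod API:

```python
import awkward as ak

# Jagged array of complex values; the third sublist is empty.
arr = ak.Array([[1 + 1j, 2 + 2j], [3 + 3j], []])

# Reducing each sublist with prod: the empty sublist must yield the
# multiplicative identity (1+0j), not (0+0j), on every backend.
print(ak.prod(arr, axis=-1).to_list())  # [4j, (3+3j), (1+0j)]
```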

ianna requested a review from jpivarski on September 12, 2024 at 10:56

codecov bot commented Sep 12, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.23%. Comparing base (b749e49) to head (56e812c).
Report is 151 commits behind head on main.

105 files have indirect coverage changes.

ianna changed the title from "fix: make sure that both CPU and GPU produce identical results" to "fix: GPU complex reducer prod for empty lists", Sep 12, 2024
ianna added the gpu label (Concerns the GPU implementation, backend = "cuda"), Sep 12, 2024
@jpivarski (Member) left a comment

I don't know why this makes a difference. As I understand it, T is float when the kernel is applied to complex64 and T is double when the kernel is applied to complex128. C-style casting should do a conversion-cast to the appropriate type, regardless of whether the constant was initially int or float (and it should do it at compile-time).

root [0] (float)1.0f
(float) 1.00000f
root [1] (double)1.0f
(double) 1.0000000
root [2] (float)1
(float) 1.00000f
root [3] (double)1
(double) 1.0000000

But if it does make a difference, what test demonstrates that? Just adding a test that fails for the old code and passes for the new code would be convincing enough.

@ianna (Collaborator, Author) commented Sep 12, 2024

> I don't know why this makes a difference. As I understand it, T is float when the kernel is applied to complex64 and T is double when the kernel is applied to complex128. C-style casting should do a conversion-cast to the appropriate type, regardless of whether the constant was initially int or float (and it should do it at compile-time).
>
> […]
>
> But if it does make a difference, what test demonstrates that? Just adding a test that fails for the old code and passes for the new code would be convincing enough.

Please check the test that was reported in #3214; it no longer fails for me. However, I was wondering along the same lines: why does it make a difference? I did some more digging, and it turns out that we have the same or similar issue with ak.prod on the CPU: #3236.
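
(For reference, a hedged sketch of the kind of cross-backend comparison under discussion; the values are illustrative, and running it requires CuPy and a CUDA-capable device.)

```python
import awkward as ak

# Illustrative data: the empty sublist is the case that used to disagree.
cpu = ak.Array([[1.1 + 0.1j, 1.1 - 0.1j], []])
gpu = ak.to_backend(cpu, "cuda")  # assumes CuPy and a CUDA device

# After the fix, both backends reduce the empty sublist to (1+0j).
assert ak.prod(cpu, axis=-1).to_list() == ak.to_backend(
    ak.prod(gpu, axis=-1), "cpu"
).to_list()
```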

@jpivarski (Member) left a comment

(In our meeting, @ianna pointed out that one of the tests was failing without this fix. So we do have a test that is sensitive to it.)

All tests-cuda pass on my GPU as well, so now we've double-tested it.

I'll merge this because it was passing all CI tests before the set of required tests was partially updated. "Partially" because the new tests in #3217 haven't been run in a while, so they don't appear in the drop-down menu to add them until I run them again. That was taking a while, so I came over here to take care of this PR without waiting for the other one to finish first.
