
fix(ops): Fix inconsistent padding calculation in PyTorch backend ops #20774

Conversation

harshaljanjani
Contributor

I was still able to reproduce the error: the PyTorch backend behaved inconsistently between static shape inference and dynamic execution for pooling operations, particularly with 'same' padding and non-unit strides. The root cause was padding calculation logic that did not properly handle asymmetric padding cases.

Key changes:

  • Rewrote _compute_padding_length() to handle stride-based padding
  • Fixed padding calculation to properly support asymmetric padding cases
  • Standardized channels_first/channels_last conversion in pooling ops
  • Cleaned up padding application in _apply_same_padding()
  • Added proper handling of data_format throughout pooling pipeline
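The stride-aware calculation the first two bullets describe can be sketched as follows. This is an illustrative reimplementation of the standard Keras/TF 'same' padding convention, not the actual `_compute_padding_length()` code:

```python
import math

def same_padding_1d(input_size, kernel_size, stride):
    """Sketch of stride-aware 'same' padding for one spatial dim.

    Follows the standard Keras/TF convention: the output size is
    ceil(input / stride), and any odd total padding is split
    asymmetrically, with the extra unit on the trailing side.
    """
    output_size = math.ceil(input_size / stride)
    total = max((output_size - 1) * stride + kernel_size - input_size, 0)
    left = total // 2
    return left, total - left  # trailing side may get one extra unit
```

With input 3, kernel 2, stride 2 the total padding is 1, so the split is (0, 1), a case a symmetric-only calculation cannot express.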

This fixes the issue where MaxPooling2D with 'same' padding would produce different shapes between compute_output_shape() and actual execution, e.g. (1,5,2,2) vs (1,5,2,1) (#20235).
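The mismatch can be reproduced arithmetically. The concrete dimensions below are hypothetical, but they show how symmetric-only padding under-produces by one output element, matching the kind of off-by-one seen in the reported shapes:

```python
import math

def same_output_size(input_size, stride):
    # What static shape inference promises for 'same' padding.
    return math.ceil(input_size / stride)

def pooled_output_size(input_size, kernel_size, stride, pad):
    # What actually executes with a symmetric padding `pad` per side.
    return (input_size + 2 * pad - kernel_size) // stride + 1

# With a last dimension of 3, pool size 2, stride 2: 'same' promises
# ceil(3 / 2) = 2, but a symmetric pad of 0 executes to
# (3 - 2) // 2 + 1 = 1 -- a one-element shortfall in that dimension.
```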

Rebased on top of @sachinprasadhs's September 2024 PR (#20270) to incorporate the latest keras:master changes.

sachinprasadhs and others added 5 commits January 17, 2025 10:43
codecov-commenter commented Jan 17, 2025

Codecov Report

Attention: Patch coverage is 89.18919% with 4 lines in your changes missing coverage. Please review.

Project coverage is 81.99%. Comparing base (25d6d80) to head (cc697dd).

Files with missing lines | Patch % | Lines
keras/src/backend/torch/nn.py | 89.18% | 2 Missing and 2 partials ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master   #20774   +/-   ##
=======================================
  Coverage   81.99%   81.99%           
=======================================
  Files         555      555           
  Lines       51805    51808    +3     
  Branches     8012     8013    +1     
=======================================
+ Hits        42475    42482    +7     
+ Misses       7378     7376    -2     
+ Partials     1952     1950    -2     
Flag | Coverage Δ
keras | 81.82% <89.18%> (+<0.01%) ⬆️
keras-jax | 64.11% <0.00%> (-0.01%) ⬇️
keras-numpy | 58.99% <0.00%> (-0.01%) ⬇️
keras-openvino | 29.92% <0.00%> (-0.01%) ⬇️
keras-tensorflow | 64.80% <0.00%> (-0.04%) ⬇️
keras-torch | 64.18% <89.18%> (+0.08%) ⬆️

Flags with carried forward coverage won't be shown.

