
[MRG] Sinkhorn gradient last step #693

Merged: 17 commits into PythonOT:master, Nov 21, 2024

Conversation

@SoniaMaz8 (Contributor) commented on Nov 19, 2024

Types of changes

I added a grad="last_step" option to the ot.solvers.solve function that limits gradient computation to the last Sinkhorn iteration instead of backpropagating through every iteration.
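For illustration, a minimal usage sketch assuming the torch backend (the problem setup below is made up; only ot.solve and its grad parameter come from this PR):

import torch
import ot

n = 50
x = torch.randn(n, 2)
y = torch.randn(n, 2)
M = ot.dist(x, y)  # cost matrix

a = torch.full((n,), 1.0 / n, requires_grad=True)  # source marginal
b = torch.full((n,), 1.0 / n)  # target marginal

# Sinkhorn runs detached; only the final iteration is tracked by autograd,
# so memory does not grow with the number of iterations.
res = ot.solve(M, a, b, reg=1.0, grad="last_step")
res.value.backward()  # a.grad flows through the final iteration only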

Motivation and context / Related issue

This change is needed when backpropagating through all the Sinkhorn iterations is too memory-intensive.

How has this been tested (if it applies)

I created a test_solve_last_step test that checks that the gradient obtained with grad="last_step" differs from the one obtained with grad="autodiff".
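In spirit, the test compares gradients along these lines (a sketch; names and data are illustrative, not copied from the test suite):

import torch
import ot

n = 10
x = torch.randn(n, 2)
M = ot.dist(x, x + 1.0)
b = torch.full((n,), 1.0 / n)

a1 = torch.full((n,), 1.0 / n, requires_grad=True)
ot.solve(M, a1, b, reg=1.0, grad="last_step").value.backward()

a2 = torch.full((n,), 1.0 / n, requires_grad=True)
ot.solve(M, a2, b, reg=1.0, grad="autodiff").value.backward()

# differentiating through only the last step gives a different gradient
assert not torch.allclose(a1.grad, a2.grad)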

PR checklist

  • I have read the CONTRIBUTING document.
  • The documentation is up-to-date with the changes I made (check build artifacts).
  • All tests passed, and additional code has been covered with new tests.
  • I have added the PR and Issue fix to the RELEASES.md file.

codecov bot commented on Nov 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.06%. Comparing base (6311e25) to head (0e985b2).
Report is 1 commit behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #693      +/-   ##
==========================================
+ Coverage   97.05%   97.06%   +0.01%     
==========================================
  Files          98       98              
  Lines       19877    19955      +78     
==========================================
+ Hits        19292    19370      +78     
  Misses        585      585              

@rflamary (Collaborator) left a comment:

This is nice, a few comments.

Review threads on ot/solvers.py: 3 resolved
@rflamary (Collaborator) left a comment:

This is very nice, a few remaining comments.

potentials = (log["log_u"], log["log_v"])

if grad == "last_step":
    loga = nx.log(a0)
@rflamary (Collaborator) commented:

Maybe add a few comments to discuss the Sinkhorn iteration here.
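For reference, a sketch of what such comments could document (variable names follow the snippet above; b0, M0, and reg are assumed from context, and the merged code may differ):

# Sketch, not the merged code: replay one log-domain Sinkhorn update with
# autograd enabled, starting from the converged (detached) potentials.
loga = nx.log(a0)  # a0, b0: input marginals, still attached to the graph
logb = nx.log(b0)
log_u, log_v = log["log_u"], log["log_v"]  # detached dual potentials

# one extra Sinkhorn iteration, this time tracked by autograd
log_v = logb - nx.logsumexp(-M0 / reg + log_u[:, None], axis=0)
log_u = loga - nx.logsumexp(-M0 / reg + log_v[None, :], axis=1)

# transport plan rebuilt from the re-attached potentials
plan = nx.exp(-M0 / reg + log_u[:, None] + log_v[None, :])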

@@ -412,7 +412,10 @@ def solve(
             potentials = (log["u"], log["v"])

         elif reg_type.lower() in ["entropy", "kl"]:
-            if grad == "envelope":  # if envelope then detach the input
+            if grad in [
+                "envelope",
@rflamary (Collaborator) commented:

Could you also add the "detach" option here and in the documentation? It seems like a nice feature that can be practical.
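For context, a sketch of the requested semantics (assumed from this discussion, not copied from the merged code):

# grad="detach": outputs are cut from the autograd graph entirely
res = ot.solve(M, a, b, reg=1.0, grad="detach")
# res.value and res.plan carry no autograd history, so the solve keeps no
# graph at all; useful when the plan is only needed as a fixed quantity
# inside a larger differentiable pipeline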

@rflamary changed the title from [WIP] Sinkhorn gradient last step to [MRG] Sinkhorn gradient last step on Nov 21, 2024
@rflamary merged commit db28f4b into PythonOT:master on Nov 21, 2024
16 checks passed