
Add PyTorch backend for MF #546

Merged
merged 9 commits into from
Nov 22, 2023

Conversation

@hieuddo (Member) commented Nov 21, 2023

Description

  • add PyTorch backend for MF
  • add TorchMF to examples/biased_mf.py
  • MF minor fix: the fallback should trigger when either user_idx or item_idx is unknown:
    if not self.knows_user(user_idx) or not self.knows_item(item_idx):
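The guard in the last bullet can be sketched as follows. This is a minimal, hypothetical stand-in: only `knows_user`/`knows_item` and the either-unknown check come from the diff; the class layout, `global_mean` fallback, and `_dot` placeholder are assumptions for illustration, not Cornac's actual code.

```python
class MFSketch:
    """Minimal stand-in illustrating the corrected unknown-user/item guard."""

    def __init__(self, num_users, num_items, global_mean):
        self.num_users = num_users
        self.num_items = num_items
        self.global_mean = global_mean

    def knows_user(self, user_idx):
        return user_idx is not None and 0 <= user_idx < self.num_users

    def knows_item(self, item_idx):
        return item_idx is not None and 0 <= item_idx < self.num_items

    def score(self, user_idx, item_idx):
        # Fall back to the global mean when EITHER index is unknown.
        if not self.knows_user(user_idx) or not self.knows_item(item_idx):
            return self.global_mean
        return self._dot(user_idx, item_idx)

    def _dot(self, user_idx, item_idx):
        return 0.0  # placeholder for the real factor dot product
```

The point of the fix is the second `not`: without it, known items would (incorrectly) also route known users to the fallback.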

Checklist:

  • I have added tests.

@hieuddo (Member, Author) commented Nov 21, 2023

Results obtained from examples/biased_mf.py (two runs):

           |    MAE |   RMSE | NDCG@10 | Recall@10 | Train (s) | Test (s)
---------- + ------ + ------ + ------- + --------- + --------- + --------
GlobalAvg  | 0.9408 | 1.0835 |  0.0085 |    0.0023 |    0.0000 |   8.1069
MF         | 0.6910 | 0.8470 |  0.1058 |    0.0335 |    0.4068 |  36.5108
TorchMF    | 0.6947 | 0.8590 |  0.1014 |    0.0308 |  119.5144 |  10.5325

           |    MAE |   RMSE | NDCG@10 | Recall@10 | Train (s) | Test (s)
---------- + ------ + ------ + ------- + --------- + --------- + --------
GlobalAvg  | 0.9373 | 1.0797 |  0.0088 |    0.0026 |    0.0000 |   9.0318
MF         | 0.6911 | 0.8473 |  0.0976 |    0.0318 |    0.4288 |  32.9461
TorchMF    | 0.6962 | 0.8608 |  0.0977 |    0.0313 |  120.6941 |  11.7570

@hieuddo hieuddo requested a review from tqtg November 21, 2023 07:25

Review comment on the diff:

    def forward(self, uids, iids):
        ues = self.u_factors(uids)
        uis = self.i_factors(iids)

Member (reviewer): do you mean ies (item embeddings) here?
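As context for the naming nit above, a biased-MF forward pass can be sketched in plain NumPy. Only the `u_factors`/`i_factors` names follow the snippet; the bias terms, `global_mean`, and the rest are hypothetical illustration (Cornac's actual TorchMF uses PyTorch embeddings), not the PR's implementation.

```python
import numpy as np

class BiasedMFSketch:
    """NumPy sketch of biased MF: r_hat = mu + b_u + b_i + p_u . q_i."""

    def __init__(self, num_users, num_items, k, global_mean, seed=0):
        rng = np.random.default_rng(seed)
        self.u_factors = rng.normal(scale=0.01, size=(num_users, k))
        self.i_factors = rng.normal(scale=0.01, size=(num_items, k))
        self.u_biases = np.zeros(num_users)
        self.i_biases = np.zeros(num_items)
        self.global_mean = global_mean

    def forward(self, uids, iids):
        ues = self.u_factors[uids]  # user embeddings
        ies = self.i_factors[iids]  # item embeddings ('ies', as the review suggests)
        interaction = np.sum(ues * ies, axis=1)
        return self.global_mean + self.u_biases[uids] + self.i_biases[iids] + interaction
```

With freshly initialized (near-zero) factors and zero biases, predictions start near the global mean, which is the usual biased-MF behavior before training.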

@tqtg (Member) commented Nov 22, 2023

@hieuddo I did some code refactoring to merge them into one MF model with different options of backends. The results appear to be similar to yours.

TEST:
...
           |    MAE |   RMSE | NDCG@10 | Recall@10 | Train (s) | Test (s)
---------- + ------ + ------ + ------- + --------- + --------- + --------
GlobalAvg  | 0.9426 | 1.0868 |  0.0093 |    0.0036 |    0.0000 |   6.3037
MF-cpu     | 0.6914 | 0.8494 |  0.0899 |    0.0293 |    0.1364 |   9.3481
MF-pytorch | 0.7023 | 0.8579 |  0.1422 |    0.0440 |   68.6235 |   9.4482
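The refactoring described above (one MF model with a choice of backend) might dispatch roughly like this. This is a hypothetical sketch; the actual option name, supported values, and fitting code in Cornac may differ.

```python
class MF:
    """Sketch of a single model class that selects a training backend."""

    SUPPORTED_BACKENDS = ("cpu", "pytorch")

    def __init__(self, k=10, backend="cpu"):
        if backend not in self.SUPPORTED_BACKENDS:
            raise ValueError(f"unsupported backend: {backend!r}")
        self.k = k
        self.backend = backend

    def fit(self, train_set):
        # Route to the requested implementation; both learn the same model.
        if self.backend == "cpu":
            return self._fit_cpu(train_set)
        return self._fit_pytorch(train_set)

    def _fit_cpu(self, train_set):
        return "fit with cpu backend"  # placeholder for the SGD loop

    def _fit_pytorch(self, train_set):
        return "fit with pytorch backend"  # placeholder for the autograd loop
```

Keeping one public class with a `backend` option avoids duplicating the model's API (as the separate TorchMF class in the original commits would have).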

@hieuddo (Member, Author) commented Nov 22, 2023

Looks great.

One minor point, though: should we unify formatter settings to reduce diff noise caused by different formatters?

I'm currently using black, autopep8, and isort together, with black as the default formatter. I think black should take priority because it allows very few configuration options, so the code looks the same every time; autopep8, by contrast, leaves some parts untouched and only reformats code that violates PEP 8.

@tqtg (Member) commented Nov 22, 2023

@hieuddo I'm also using black, and we do check PEP 8 style (see the Lint with flake8 step). While we try to enforce conventions for our core codebase, we're a bit more flexible with models to ease contribution.

@hieuddo hieuddo merged commit 98ccb80 into PreferredAI:master Nov 22, 2023
12 checks passed
@hieuddo hieuddo deleted the mf-torch branch December 27, 2023 04:24
3 participants