
Explain torch compile error messages, improve PTQ vs QAT doc #730

Merged · 17 commits merged into main on Jun 17, 2024

Conversation

@andrei-stoian-zama andrei-stoian-zama requested a review from a team as a code owner June 13, 2024 13:41
@cla-bot cla-bot bot added the cla-signed label Jun 13, 2024
@RomanBredehoft (Collaborator) left a comment


Thanks! But I don't think it entirely closes https://github.com/zama-ai/concrete-ml-internal/issues/4414, as that issue is more about the error message itself. But yes, it's great to add it to the doc.

@RomanBredehoft (Collaborator)

It might also be worth considering https://github.com/zama-ai/concrete-ml-internal/issues/4498 in this PR, if possible 🙏

@yuxizama (Contributor) left a comment


I suggest the following format so that users can easily find solutions when they encounter errors. Wdyt?

Common compilation errors

Bit width limits

Error message: This is the error message
Cause: This error can occur when rounding_threshold_bits is not used and accumulated intermediate values in the computation exceed 16 bits, so the maximum TLU input bit-width is exceeded.
Possible solutions:

  • Reduce the quantization n_bits. However, this may reduce accuracy. When the quantization n_bits must be below 6, it is best to use Quantization Aware Training.
  • Use rounding_threshold_bits (see the sketch after this list). This feature is described here. It is recommended to use the fhe.Exactness.APPROXIMATE setting and to set the rounding bits 1 or 2 bits higher than the quantization n_bits.
  • Use pruning.
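
For the rounding_threshold_bits suggestion above, a minimal sketch using Concrete ML's compile_torch_model from concrete.ml.torch.compile. The tiny model and the parameter values are illustrative assumptions; the dict form of rounding_threshold_bits (with "method": "approximate") is supported in recent Concrete ML versions, while older versions take a plain integer.

```python
import torch

from concrete.ml.torch.compile import compile_torch_model

torch.manual_seed(0)

# Hypothetical tiny model, for illustration only.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)

# Representative calibration inputs used to determine quantization ranges.
inputset = torch.randn(100, 10)

# 6-bit quantization, with rounding set 2 bits higher (8 bits) and the
# approximate rounding method, as recommended in the list above.
quantized_module = compile_torch_model(
    model,
    inputset,
    n_bits=6,
    rounding_threshold_bits={"n_bits": 8, "method": "approximate"},
)
```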

Crypto-parameters

Error message: This is the error message
Cause: This is the cause
Possible solutions:

  • Solution 1
  • Solution 2

Quantization

Error message: This is the error message
Cause: This is the cause
Possible solutions:

  • Solution 1
  • Solution 2

RomanBredehoft previously approved these changes Jun 17, 2024
Review threads (all resolved):
  • docs/deep-learning/torch_support.md — 6 threads (5 outdated)
  • docs/deep-learning/fhe_assistant.md — 4 threads (all outdated)

Coverage passed ✅

Coverage details

---------- coverage: platform linux, python 3.8.18-final-0 -----------
Name    Stmts   Miss  Cover   Missing
-------------------------------------
TOTAL    7885      0   100%

60 files skipped due to complete coverage.

@andrei-stoian-zama andrei-stoian-zama merged commit b5cd1ba into main Jun 17, 2024
16 checks passed
@andrei-stoian-zama andrei-stoian-zama deleted the docs/discuss_error_messages branch June 17, 2024 15:18
3 participants