Project URL
https://pypi.org/project/fbgemm-gpu
Does this project already exist?
Yes
New Limit
1000 MiB
Which indexes
PyPI
About the project
FBGEMM_GPU (FBGEMM GPU Kernels Library) is a collection of high-performance PyTorch GPU operator libraries for training and inference. The library provides efficient table-batched embedding bag, data layout transformation, and quantization support. This project hosts the version releases of FBGEMM_GPU.
Reasons for the request
Our project targets multiple CUDA architectures, so the compiled binary blob is large. It recently exceeded 500 MiB, and we would therefore like the upload size limit increased from 500 MiB to 1000 MiB.
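The size growth from targeting multiple CUDA architectures can be sketched with a generic nvcc invocation (illustrative only: the file name, flags, and architecture list here are assumptions, not FBGEMM_GPU's actual build configuration):

```shell
# Illustrative sketch: compiling device code for several CUDA architectures
# into one fat binary. Each -gencode pair embeds another copy of the compiled
# kernels, so the object size grows roughly linearly with the number of
# architectures targeted.
nvcc -c kernels.cu -o kernels.o \
  -gencode arch=compute_70,code=sm_70 \
  -gencode arch=compute_80,code=sm_80 \
  -gencode arch=compute_90,code=sm_90
```

Because every wheel bundles the fat binary, each additional target architecture pushes the package further past a fixed upload limit.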
Code of Conduct
I agree to follow the PSF Code of Conduct