Plus notation for .whl filenames overlaps between pytorch patch versions #198
Comments
They might actually be the same wheel; it looks like only 1.13.0 is in the build matrix: https://github.com/pyg-team/pyg-lib/blob/0.1.0/.github/workflows/building.yml
That's very interesting. I'm not even sure how 1.13.1 ended up being uploaded, as there's no 1.13.1 torch-version specified. 🤔
Indeed, we just symlink them, since there was confusion for people who tried, e.g.,
Would a URL scheme in line with the plus notation, like
Yeah, this is a good idea, although we likely still want to maintain the old behavior such that something like
still works (so it's not a top priority for me at the moment).
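For context, the existing flow being preserved here presumably looks something like the sketch below; the exact command is an assumption, with the index URL pattern taken from the pages quoted later in this issue.

```console
# Sketch of the current per-torch/per-CUDA index flow (assumed, not quoted from the docs):
pip install pyg-lib -f https://data.pyg.org/whl/torch-1.13.0+cu116.html
```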
Good morning ☀️. Pardon me for jumping in here 🙇 I believe I can add some extra thoughts and a possible solution to this problem that should solve it once and for all.

Problem statement

I notice that the current setup works well for Jupyter notebook users (as in your example above), where it is common to install dependencies in separate cells. This allows users to first install torch (for whatever CUDA version) and then later use the matching wheel index. However, when these libraries need to be included in other downstream libraries, it becomes more difficult. I maintain a rather large (private) ML library which supports a range of torch versions. Its requirements.txt can be thought of as:
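As a purely hypothetical illustration (the package names and version ranges below are assumptions, not the actual library's pins), such a requirements.txt might look roughly like:

```text
torch>=1.12,<1.14
pyg-lib>=0.1.0
torch-scatter>=2.1.0
torch-sparse>=0.6.16
```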
Currently it is not possible to install this requirements.txt without hardcoding the plus notation.
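(A hedged illustration of that hardcoding; the exact pin below is assumed, based on the local version labels of the wheel filenames quoted later in this issue.)

```text
pyg-lib==0.1.0+pt113cu116
```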
There are two issues with this approach:
Inspiration

Coincidentally,
Note:

Possible solution

I think the least confusing method would be to do what
This then allows:
This has some advantages over the current approach:
In short, I see two relatively simple pieces of work to make this happen:
Is there anything I haven't considered in this proposal? Your guidance on this topic would be much appreciated! 🙇 (p.s. I'm happy to work on this if either of you is busy at the moment!)
I'm not sure what pip's behaviour will be when pip sees links to both.

There is another potential issue, where I'm afraid you'll have to hardcode the plus notation in your reqs.txt. For ray, the docker image for GPU is based on the CPU image, so there you have this scenario and you have to hardcode the plus notation for the second install: ray-project/ray#26072
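For illustration, the kind of hardcoded second install meant here might look like the sketch below; the torch version and index URL are examples, not taken from the ray Dockerfiles.

```console
# Base image already ships a CPU torch; force the CUDA build by pinning the local version label:
pip install 'torch==1.13.1+cu116' --extra-index-url https://download.pytorch.org/whl/cu116
```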
I'll do some testing this week to see. 🤔 I'm also interested. Probably I can compile
Thank you for pointing this out. My hope is that we could do
To solve this, we should do
To follow your example
I've opened an issue for https://download.pytorch.org/whl/cpu/torch_stable.html to also include local version identifiers (see pytorch/pytorch#95135), so that both of them will start working as expected. @ddelange do you agree that this is a safe change?

I really would like to see this as two separate issues. Adding

How we could put those wheels together on one .html page and how
I'm sorry for the following; I'm doing some testing right now and coming up with very strange results on how

I'll come up with some clear proof that this would or would not work. 🙌 Thank you for the questions!
afaik, if you want to specify a local version label in your package's Requires-Dist (or in a requirements.txt passed directly to pip), you can only do so in a pinned/exact manner. You can not use wildcards in either the version or the local version label:

```console
$ pip wheel --no-deps 'torch==1.13.*+cu116' --extra-index-url 'https://download.pytorch.org/whl/cu116'
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/, https://download.pytorch.org/whl/cu116
ERROR: Could not find a version that satisfies the requirement torch==1.13.*+cu116 (from versions: 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.0+cu116, 1.12.1, 1.12.1+cu116, 1.13.0, 1.13.0+cu116, 1.13.1, 1.13.1+cu116)
ERROR: No matching distribution found for torch==1.13.*+cu116
```

And you can not specify version ranges in combination with a local version label. It looks like the specified local version label will simply be ignored (here it downloads a cu116 wheel):

```console
$ pip wheel --no-deps 'torch>=1.12,<=1.13.2+cpu' --extra-index-url 'https://download.pytorch.org/whl/cpu' --extra-index-url 'https://download.pytorch.org/whl/cu116'
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/, https://download.pytorch.org/whl/cpu, https://download.pytorch.org/whl/cu116
Collecting torch<=1.13.2+cpu,>=1.12
  Downloading https://download.pytorch.org/whl/cu116/torch-1.13.1%2Bcu116-cp38-cp38-linux_x86_64.whl (1977.9 MB)
```

So your hands are kind of tied when it comes to local version labels on your side, I think.
If this is indeed the case, a PR @ pyg could make sense to add pytorch ranges (without local version labels) to the wheels, such that your package doesn't need to specify any local version labels and your users can control cuda/cpu exclusively via the URLs passed to pip. The problem I mentioned with existing installations still persists though; not sure whether that is a pip bug or a feature 😅
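If that route were taken, a minimal sketch of the idea (not the actual pyg-lib setup.py; the torch range here is an assumption) might look like this:

```python
# Hypothetical sketch only (not pyg-lib's real packaging code): declare a torch
# *range* without any local version label, so users select cpu/cuda builds purely
# via the index URL they pass to pip.
from setuptools import setup

setup(
    name="pyg_lib",
    version="0.1.0",
    install_requires=[
        "torch>=1.13.0,<1.14.0",  # no '+cpu' / '+cu116' label here
    ],
)
```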
🐛 Describe the bug
As mentioned in #66 (comment), there seems to be a regression of sorts in the + notation for .whl generation.
Copied from the other issue:
I believe I have found an edge case which resurfaces the original comment of "currently, the filenames are identical (just a different URL path), which confuses pip into thinking there is no need for a re-install".
The torch version in the + metadata is specified only up to the minor version, and therefore filenames can be identical across torch patch versions.
In https://data.pyg.org/whl/torch-1.13.1%2Bcu116.html (for 1.13.1)
pyg_lib-0.1.0+pt113cu116-cp310-cp310-linux_x86_64.whl
In https://data.pyg.org/whl/torch-1.13.0%2Bcu116.html (for 1.13.0)
pyg_lib-0.1.0+pt113cu116-cp310-cp310-linux_x86_64.whl
This then resurfaces all the previously mentioned 'problems' with pip not reinstalling when it clearly should, i.e. when the torch (or CUDA) version is updated. Could we fix that by specifying the torch version down to the patch level in the + metadata?
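To make the collision concrete, here is a small hypothetical sketch (not the actual build script) of how such a suffix could be derived and why including the patch version would keep the filenames distinct:

```python
# Hypothetical sketch (not pyg-lib's actual build code): a suffix built from only
# major.minor collides across torch patch releases; including the patch version
# keeps the wheel filenames distinct.
def local_label(torch_version: str, cuda: str, include_patch: bool = False) -> str:
    parts = torch_version.split(".")
    keep = 3 if include_patch else 2
    return "pt" + "".join(parts[:keep]) + cuda

print(local_label("1.13.0", "cu116"))                      # pt113cu116
print(local_label("1.13.1", "cu116"))                      # pt113cu116  (identical -> same filename)
print(local_label("1.13.1", "cu116", include_patch=True))  # pt1131cu116 (distinct)
```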
Thank you for looking into this!
Environment
N/A