combine and refactor requirements.txt #5

Open
maxftz opened this issue Feb 18, 2025 · 1 comment
maxftz commented Feb 18, 2025

IMO, as the number of requirements files grows, it becomes increasingly difficult to track which dependencies are installed in which environment.

I believe a better way to manage mutually exclusive dependencies would be a single requirements.txt file with PEP 508 environment markers.

For example (adapting from https://stackoverflow.com/a/72949970), if I take a requirements.txt like this:

# for CUDA 11.8 torch on Linux
--extra-index-url https://download.pytorch.org/whl/cu118; sys_platform == "linux"
torch; sys_platform == "linux"
torchvision; sys_platform == "linux"
pytorch-lightning; sys_platform == "linux"

# for MPS accelerated torch on Mac
torch; sys_platform == "darwin"
torchvision; sys_platform == "darwin"
pytorch-lightning; sys_platform == "darwin"

(Note that this is just a quick example; obviously versions should be pinned -- or better yet, there should be a proper lock file!)
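How pip treats those markers can be sketched in a few lines of Python. This is a toy illustration, not pip's actual parser (pip handles the full PEP 508 marker grammar); marker_matches is a hypothetical helper covering only the sys_platform form used above:

```python
import sys

def marker_matches(marker: str, platform: str = sys.platform) -> bool:
    """Toy evaluator for the one marker form used above.

    Real pip parses the full PEP 508 marker grammar; this sketch only
    handles 'sys_platform == "<value>"' to show the mechanism: lines
    whose marker evaluates to False are simply skipped at install time.
    """
    lhs, _, rhs = marker.partition("==")
    if lhs.strip() != "sys_platform":
        raise ValueError("toy evaluator only supports sys_platform markers")
    return platform == rhs.strip().strip('"')

print(marker_matches('sys_platform == "linux"', platform="linux"))   # True
print(marker_matches('sys_platform == "linux"', platform="darwin"))  # False
```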

I can run pip install -r requirements.txt on either macOS or Linux (with a GPU) without needing separate files.
The resulting environments differ in the expected ways:

--- pip_macos.lst	2025-02-06 06:42:28
+++ pip_linux_gpu.lst	2025-02-06 06:42:31
@@ -13,9 +13,22 @@
 multidict==6.1.0
 networkx==3.4.2
 numpy==2.2.2
+nvidia-cublas-cu12==12.4.5.8
+nvidia-cuda-cupti-cu12==12.4.127
+nvidia-cuda-nvrtc-cu12==12.4.127
+nvidia-cuda-runtime-cu12==12.4.127
+nvidia-cudnn-cu12==9.1.0.70
+nvidia-cufft-cu12==11.2.1.3
+nvidia-curand-cu12==10.3.5.147
+nvidia-cusolver-cu12==11.6.1.9
+nvidia-cusparse-cu12==12.3.1.170
+nvidia-cusparselt-cu12==0.6.2
+nvidia-nccl-cu12==2.21.5
+nvidia-nvjitlink-cu12==12.4.127
+nvidia-nvtx-cu12==12.4.127
 packaging==24.2
 pillow==11.1.0
-pip==24.3.1
+pip==24.0
 propcache==0.2.1
 pytorch-lightning==2.5.0.post0
 PyYAML==6.0.2
@@ -25,5 +38,6 @@
 torchmetrics==1.6.1
 torchvision==0.21.0
 tqdm==4.67.1
+triton==3.2.0
 typing_extensions==4.12.2
 yarl==1.18.3

Originally posted by @psibre in #4 (comment)


maxftz commented Feb 18, 2025

@psibre I'll consider this option, but we might also want something more modern and flexible, such as pyproject.toml with a tool like uv.
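For reference, the pyproject.toml route could look roughly like this. Project name and versions are illustrative, and the torch source pinning uses uv's documented [tool.uv.sources] / [[tool.uv.index]] mechanism; this is a sketch, not a tested configuration:

```toml
[project]
name = "example-project"  # illustrative, not this repo's actual name
version = "0.1.0"
dependencies = [
    # PEP 508 markers work here too; uv resolves them into a
    # single cross-platform lock file (uv.lock).
    'torch; sys_platform == "linux" or sys_platform == "darwin"',
    'torchvision; sys_platform == "linux" or sys_platform == "darwin"',
    "pytorch-lightning",
]

# uv-specific: pull CUDA 11.8 wheels from the PyTorch index on Linux only;
# macOS falls back to the default (MPS-capable) wheels on PyPI.
[tool.uv.sources]
torch = [{ index = "pytorch-cu118", marker = "sys_platform == 'linux'" }]

[[tool.uv.index]]
name = "pytorch-cu118"
url = "https://download.pytorch.org/whl/cu118"
explicit = true
```

Unlike the requirements.txt approach, the lock file produced by uv lock would pin exact versions for both platforms at once, addressing the pinning caveat above.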
