
Update transformers test requirements #22911

Open · wants to merge 4 commits into main from tlwu/ci_python_test

Conversation

@tianleiwu (Contributor) commented Nov 21, 2024

Description

  • Install PyTorch for the transformers tests. The installation runs before the Python tests so that any test can use torch if needed.
  • Update the protobuf and numpy versions used in the transformers tests.

Motivation and Context

Currently, transformers tests are enabled in the following CI pipelines:

  • Linux CPU CI Pipeline (torch for cpu-only)
  • Linux GPU CI Pipeline (torch for cuda 12)
  • Windows GPU CUDA CI Pipeline (torch for cpu-only right now; we may change it to torch for CUDA 12 in the future)

For the ROCm CI Pipeline, transformers tests are enabled but skipped, since the onnx package is not installed in that CI.

Previously, torch was not installed before the Python tests, so some tests that depend on torch were skipped, such as test_bind_onnx_types_not_supported_by_numpy and test_user_compute_stream.

In this PR, build.py is changed to install torch before running the Python tests.
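The install step described above can be sketched as follows. This is a minimal illustration, not the actual build.py code: `torch_install_command`, `use_cuda`, and `linux` are hypothetical names standing in for build.py's argument parsing and platform helpers.

```python
# Sketch of installing torch before the Python tests: use the CPU-only
# wheel index unless this is a Linux build with CUDA enabled, in which
# case the default PyPI wheel (built for CUDA) is used.
import sys


def torch_install_command(use_cuda: bool, linux: bool) -> list[str]:
    # Extra pip arguments selecting the CPU-only wheel index when needed.
    extra = [] if use_cuda and linux else ["--index-url", "https://download.pytorch.org/whl/cpu"]
    return [sys.executable, "-m", "pip", "install", "torch", *extra]


# Linux CUDA build: install the default (CUDA) wheel from PyPI.
print(torch_install_command(use_cuda=True, linux=True))
# Any other configuration: install the CPU-only wheel.
print(torch_install_command(use_cuda=False, linux=True))
```

Running the command list through a subprocess helper before the test phase is what makes torch available to the tests that previously skipped.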

@tianleiwu tianleiwu marked this pull request as draft November 21, 2024 01:14
@tianleiwu tianleiwu force-pushed the tlwu/ci_python_test branch 2 times, most recently from 46a687e to b7cc8e8 Compare November 21, 2024 05:27
@tianleiwu tianleiwu marked this pull request as ready for review November 21, 2024 22:23
Review comment on this change in build.py:

    if args.enable_transformers_tool_test and not args.disable_contrib_ops and not args.use_rocm:
        # PyTorch is required for transformers tests, and optional for some python tests.
        # Install cpu only version of torch when cuda is not enabled in Linux.
        extra = [] if args.use_cuda and is_linux() else ["--index-url", "https://download.pytorch.org/whl/cpu"]

Contributor:
Can we always pass the --index-url? On the PyTorch Get Started page it is also used for Linux CUDA, and there are links for all three CUDA versions.

@tianleiwu (Contributor, author) replied Nov 21, 2024:
The first commit actually used that. For some reason, there was an error importing torch in the Windows GPU CI pipeline.

Later, I changed it because the --index-url changes whenever PyTorch moves to a new CUDA version (for example, to 12.6). If we do not set an index URL, pip will always pick up the latest version from PyPI.
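The maintenance concern above can be illustrated with a small sketch: the CUDA wheel index URL embeds the CUDA version, so a hard-coded URL goes stale when PyTorch bumps its CUDA build. The URL pattern below is an assumption based on the download.pytorch.org layout, and `cuda_index_url` is a hypothetical helper, not part of build.py.

```python
# Why a fixed CUDA --index-url is brittle: the index path encodes the
# CUDA version (cu121, cu126, ...), so the URL must be updated whenever
# PyTorch changes its CUDA version.
def cuda_index_url(cuda_version: str) -> str:
    # "12.1" -> "cu121", "12.6" -> "cu126"
    tag = "cu" + cuda_version.replace(".", "")
    return f"https://download.pytorch.org/whl/{tag}"


print(cuda_index_url("12.1"))  # https://download.pytorch.org/whl/cu121
```

Omitting the index URL sidesteps this: pip then resolves torch from PyPI, which always tracks the latest published build.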

2 participants