[SPLIT BUILD] add split build to validate binaries #1

Open · wants to merge 15 commits into base: main
5 changes: 3 additions & 2 deletions .circleci/scripts/binary_populate_env.sh
@@ -48,9 +48,9 @@ if [[ -z "$DOCKER_IMAGE" ]]; then
if [[ "$PACKAGE_TYPE" == conda ]]; then
export DOCKER_IMAGE="pytorch/conda-cuda"
elif [[ "$DESIRED_CUDA" == cpu ]]; then
export DOCKER_IMAGE="pytorch/manylinux-cpu"
export DOCKER_IMAGE="pytorch/manylinux-builder:cpu"
else
export DOCKER_IMAGE="pytorch/manylinux-cuda${DESIRED_CUDA:2}"
export DOCKER_IMAGE="pytorch/manylinux-builder:${DESIRED_CUDA:2}"
fi
fi

@@ -121,6 +121,7 @@ export DESIRED_PYTHON="$DESIRED_PYTHON"
export DESIRED_CUDA="$DESIRED_CUDA"
export LIBTORCH_VARIANT="${LIBTORCH_VARIANT:-}"
export BUILD_PYTHONLESS="${BUILD_PYTHONLESS:-}"
export USE_SPLIT_BUILD="${USE_SPLIT_BUILD:-}"
export DESIRED_DEVTOOLSET="$DESIRED_DEVTOOLSET"
if [[ "${BUILD_FOR_SYSTEM:-}" == "windows" ]]; then
export LIBTORCH_CONFIG="${LIBTORCH_CONFIG:-}"
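For reference, a minimal sketch (not part of the diff) tracing how the image selection above resolves for sample values of `DESIRED_CUDA`; the sample values and the loop are illustrative only:

```bash
# Illustrative trace of the DOCKER_IMAGE selection logic above (sample values).
PACKAGE_TYPE=manywheel
for DESIRED_CUDA in cpu cu118; do
  if [[ "$PACKAGE_TYPE" == conda ]]; then
    DOCKER_IMAGE="pytorch/conda-cuda"
  elif [[ "$DESIRED_CUDA" == cpu ]]; then
    DOCKER_IMAGE="pytorch/manylinux-builder:cpu"
  else
    # ${DESIRED_CUDA:2} drops the leading "cu", e.g. cu118 -> 118
    DOCKER_IMAGE="pytorch/manylinux-builder:${DESIRED_CUDA:2}"
  fi
  echo "$DESIRED_CUDA -> $DOCKER_IMAGE"
done
```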
15 changes: 15 additions & 0 deletions .github/workflows/validate-binaries.yml
@@ -47,6 +47,13 @@ on:
default: false
required: false
type: boolean
use_split_build:
description: |
[Experimental] Build a libtorch-only wheel, then build the pytorch wheel
such that its binaries are built from the libtorch wheel.
required: false
type: boolean
default: false
workflow_dispatch:
inputs:
os:
@@ -105,6 +112,13 @@ on:
default: false
required: false
type: boolean
use_split_build:
description: |
[Experimental] Build a libtorch-only wheel, then build the pytorch wheel
such that its binaries are built from the libtorch wheel.
required: false
type: boolean
default: false


jobs:
@@ -141,6 +155,7 @@ jobs:
include-test-ops: ${{ inputs.include-test-ops }}
use-only-dl-pytorch-org: ${{ inputs.use-only-dl-pytorch-org }}
use-force-reinstall: ${{ inputs.use-force-reinstall }}
use_split_build: ${{ inputs.use_split_build }}

linux-aarch64:
if: inputs.os == 'linux-aarch64' || inputs.os == 'all'
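A minimal sketch of how the new input could be exercised through a manual dispatch; the `channel` value and any other required inputs are assumptions, so check the workflow's `workflow_dispatch` inputs for what is actually needed:

```bash
# Hypothetical manual run of the validation workflow with the split build enabled.
gh workflow run validate-binaries.yml \
  --repo pytorch/builder \
  -f os=linux \
  -f channel=nightly \
  -f use_split_build=true
```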
2 changes: 1 addition & 1 deletion CUDA_UPGRADE_GUIDE.MD
@@ -61,7 +61,7 @@ There are three types of Docker containers we maintain in order to build Linux b
Add setup for our Docker `libtorch` and `manywheel`:
1. Follow this PR [PR 1003](https://github.com/pytorch/builder/pull/1003) for all steps in this section
2. For `libtorch`, the code changes are usually copy-paste. For `manywheel`, you should manually verify the versions of the shared libraries with the CUDA you downloaded before.
3. This is Manual Step: Create a ticket for PyTorch Dev Infra team to Create a new repo to host manylinux-cuda images in docker hub, for example, https://hub.docker.com/r/pytorch/manylinux-cuda115. This repo should have public visibility and read & write access for bots. This step can be removed once the following [issue](https://github.com/pytorch/builder/issues/901) is addressed.
3. This is a manual step: create a ticket for the PyTorch Dev Infra team to create a new repo to host manylinux-cuda images on Docker Hub, for example, https://hub.docker.com/r/pytorch/manylinux-builder:cuda115. This repo should have public visibility and read & write access for bots. This step can be removed once the following [issue](https://github.com/pytorch/builder/issues/901) is addressed.
4. Push the images to Docker Hub. This step should be automated with the help of GitHub Actions in the `pytorch/builder` repo. Make sure to update the `cuda_version` to the version you're adding in the respective YAMLs, such as `.github/workflows/build-manywheel-images.yml`, `.github/workflows/build-conda-images.yml`, `.github/workflows/build-libtorch-images.yml`.
5. Verify that each of the workflows that push the images succeeds by selecting and verifying them in the [Actions page](https://github.com/pytorch/builder/actions/workflows/build-libtorch-images.yml) of pytorch/builder. Furthermore, check [https://hub.docker.com/r/pytorch/manylinux-builder/tags](https://hub.docker.com/r/pytorch/manylinux-builder/tags) and [https://hub.docker.com/r/pytorch/libtorch-cxx11-builder/tags](https://hub.docker.com/r/pytorch/libtorch-cxx11-builder/tags) to verify that the right tags exist for the manylinux and libtorch image types (a spot-check sketch follows this list).
6. Finally, before enabling nightly binaries and CI builds, make sure to post PRs similar to [PR 1015](https://github.com/pytorch/builder/pull/1015), [PR 1017](https://github.com/pytorch/builder/pull/1017) and [this commit](https://github.com/pytorch/builder/commit/7d5e98f1336c7cb84c772604c5e0d1acb59f2d72) to enable the new CUDA build in wheels and conda.
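Complementing step 5, a hedged spot-check sketch for the Docker Hub images; the tag names below are assumptions, so confirm the real scheme on the tags pages linked above:

```bash
# Spot-check that the expected builder images exist for the new CUDA version.
for image in pytorch/manylinux-builder:cuda11.5 pytorch/libtorch-cxx11-builder:cuda11.5; do
  if docker manifest inspect "$image" > /dev/null 2>&1; then
    echo "OK:      $image"
  else
    echo "MISSING: $image"
  fi
done
```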
7 changes: 4 additions & 3 deletions aarch64_linux/build_aarch64_wheel.py
@@ -53,7 +53,7 @@ def ec2_instances_by_id(instance_id):
return rc[0] if len(rc) > 0 else None


def start_instance(key_name, ami=ubuntu18_04_ami, instance_type='t4g.2xlarge'):
def start_instance(key_name, ami=ubuntu18_04_ami, instance_type='t4g.2xlarge', ebs_size: int = 50):
inst = ec2.create_instances(ImageId=ami,
InstanceType=instance_type,
SecurityGroups=['ssh-allworld'],
@@ -65,7 +65,7 @@ def start_instance(key_name, ami=ubuntu18_04_ami, instance_type='t4g.2xlarge'):
'DeviceName': '/dev/sda1',
'Ebs': {
'DeleteOnTermination': True,
'VolumeSize': 50,
'VolumeSize': ebs_size,
'VolumeType': 'standard'
}
}
@@ -714,6 +714,7 @@ def parse_arguments():
parser.add_argument("--keep-running", action="store_true")
parser.add_argument("--terminate-instances", action="store_true")
parser.add_argument("--instance-type", type=str, default="t4g.2xlarge")
parser.add_argument("--ebs-size", type=int, default=50)
parser.add_argument("--branch", type=str, default="master")
parser.add_argument("--use-docker", action="store_true")
parser.add_argument("--compiler", type=str, choices=['gcc-7', 'gcc-8', 'gcc-9', 'clang'], default="gcc-8")
@@ -746,7 +747,7 @@ def parse_arguments():
check `~/.ssh/` folder or manually set SSH_KEY_PATH environment variable.""")

# Starting the instance
inst = start_instance(key_name, ami=ami, instance_type=args.instance_type)
inst = start_instance(key_name, ami=ami, instance_type=args.instance_type, ebs_size=args.ebs_size)
instance_name = f'{args.key_name}-{args.os}'
if args.python_version is not None:
instance_name += f'-py{args.python_version}'
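A minimal usage sketch for the new `--ebs-size` flag; the key name and instance type are placeholders, and the script may require additional arguments not shown here:

```bash
# Hypothetical invocation requesting a 100 GB root EBS volume.
python aarch64_linux/build_aarch64_wheel.py \
  --key-name my-ec2-key \
  --instance-type t4g.2xlarge \
  --ebs-size 100
```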
2 changes: 1 addition & 1 deletion common/install_cpython.sh
@@ -6,7 +6,7 @@ PYTHON_DOWNLOAD_GITHUB_BRANCH=https://github.com/python/cpython/archive/refs/hea
GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py

# Python versions to be installed in /opt/$VERSION_NO
CPYTHON_VERSIONS=${CPYTHON_VERSIONS:-"3.7.5 3.8.1 3.9.0 3.10.1 3.11.0 3.12.0 3.13.0"}
CPYTHON_VERSIONS=${CPYTHON_VERSIONS:-"3.8.1 3.9.0 3.10.1 3.11.0 3.12.0 3.13.0"}

function check_var {
if [ -z "$1" ]; then
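Since the script honors an environment override, a minimal sketch of building only a subset of interpreters (the chosen versions are examples):

```bash
# Hypothetical override of the default CPython list consumed by install_cpython.sh.
CPYTHON_VERSIONS="3.10.1 3.11.0" bash common/install_cpython.sh
```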
8 changes: 7 additions & 1 deletion conda/build_pytorch.sh
@@ -413,7 +413,13 @@ for py_ver in "${DESIRED_PYTHON[@]}"; do
else
local_channel="$(pwd)/$output_folder"
fi
conda install -y -c "file://$local_channel" pytorch==$PYTORCH_BUILD_VERSION -c pytorch -c numba/label/dev -c pytorch-nightly -c nvidia

CONDA_CHANNEL="pytorch-test"
if [[ -n "$OVERRIDE_PACKAGE_VERSION" && "$OVERRIDE_PACKAGE_VERSION" =~ .*dev.* ]]; then
CONDA_CHANNEL="pytorch-nightly"
fi

conda install -y -c "file://$local_channel" pytorch==$PYTORCH_BUILD_VERSION -c pytorch -c numba/label/dev -c $CONDA_CHANNEL -c nvidia

echo "$(date) :: Running tests"
pushd "$pytorch_rootdir"
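A short illustrative trace (sample versions only) of the channel selection added above: versions containing "dev" pull from pytorch-nightly, everything else from pytorch-test:

```bash
# Illustrative trace of the CONDA_CHANNEL selection (sample versions).
for OVERRIDE_PACKAGE_VERSION in 2.1.0.dev20230810 2.1.0; do
  CONDA_CHANNEL="pytorch-test"
  if [[ -n "$OVERRIDE_PACKAGE_VERSION" && "$OVERRIDE_PACKAGE_VERSION" =~ .*dev.* ]]; then
    CONDA_CHANNEL="pytorch-nightly"
  fi
  echo "$OVERRIDE_PACKAGE_VERSION -> $CONDA_CHANNEL"
done
```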
2 changes: 1 addition & 1 deletion ffmpeg/win/Dockerfile
@@ -1,5 +1,5 @@
# Base docker image for cross-compiling FFmpeg (LGPL) for Windows
FROM pytorch/manylinux-cuda101
FROM pytorch/manylinux-builder:cuda101
COPY . /ffmpeg-build-src
WORKDIR /ffmpeg-build-src

2 changes: 1 addition & 1 deletion manywheel/Dockerfile
@@ -117,7 +117,7 @@ ENV SSL_CERT_FILE=/opt/_internal/certs.pem
COPY --from=openssl /opt/openssl /opt/openssl
COPY --from=python /opt/python /opt/python
COPY --from=python /opt/_internal /opt/_internal
COPY --from=python /opt/python/cp37-cp37m/bin/auditwheel /usr/local/bin/auditwheel
COPY --from=python /opt/python/cp39-cp39/bin/auditwheel /usr/local/bin/auditwheel
COPY --from=intel /opt/intel /opt/intel
COPY --from=patchelf /usr/local/bin/patchelf /usr/local/bin/patchelf
COPY --from=jni /usr/local/include/jni.h /usr/local/include/jni.h
2 changes: 1 addition & 1 deletion manywheel/Dockerfile_s390x
@@ -69,5 +69,5 @@ RUN bash build_scripts/build.sh && rm -r build_scripts
FROM openssl as final
COPY --from=python /opt/python /opt/python
COPY --from=python /opt/_internal /opt/_internal
COPY --from=python /opt/python/cp37-cp37m/bin/auditwheel /usr/local/bin/auditwheel
COPY --from=python /opt/python/cp39-cp39/bin/auditwheel /usr/local/bin/auditwheel
COPY --from=patchelf /usr/local/bin/patchelf /usr/local/bin/patchelf
44 changes: 40 additions & 4 deletions manywheel/build_common.sh
@@ -45,8 +45,14 @@ fi
if [[ -z "$TORCH_PACKAGE_NAME" ]]; then
TORCH_PACKAGE_NAME='torch'
fi

if [[ -z "$TORCH_NO_PYTHON_PACKAGE_NAME" ]]; then
TORCH_NO_PYTHON_PACKAGE_NAME='torch_no_python'
fi

TORCH_PACKAGE_NAME="$(echo $TORCH_PACKAGE_NAME | tr '-' '_')"
echo "Expecting the built wheels to all be called '$TORCH_PACKAGE_NAME'"
TORCH_NO_PYTHON_PACKAGE_NAME="$(echo $TORCH_NO_PYTHON_PACKAGE_NAME | tr '-' '_')"
echo "Expecting the built wheels to all be called '$TORCH_PACKAGE_NAME' or '$TORCH_NO_PYTHON_PACKAGE_NAME'"

# Version: setup.py uses $PYTORCH_BUILD_VERSION.post$PYTORCH_BUILD_NUMBER if
# PYTORCH_BUILD_NUMBER > 1
@@ -160,11 +166,29 @@ else
fi

echo "Calling setup.py bdist at $(date)"
time CMAKE_ARGS=${CMAKE_ARGS[@]} \
EXTRA_CAFFE2_CMAKE_FLAGS=${EXTRA_CAFFE2_CMAKE_FLAGS[@]} \

if [[ "$USE_SPLIT_BUILD" == "true" ]]; then
echo "Calling setup.py bdist_wheel for split build (BUILD_LIBTORCH_WHL)"
time EXTRA_CAFFE2_CMAKE_FLAGS=${EXTRA_CAFFE2_CMAKE_FLAGS[@]} \
BUILD_LIBTORCH_WHL=1 BUILD_PYTHON_ONLY=0 \
BUILD_LIBTORCH_CPU_WITH_DEBUG=$BUILD_DEBUG_INFO \
USE_NCCL=${USE_NCCL} USE_RCCL=${USE_RCCL} USE_KINETO=${USE_KINETO} \
python setup.py bdist_wheel -d /tmp/$WHEELHOUSE_DIR
echo "Finished setup.py bdist_wheel for split build (BUILD_LIBTORCH_WHL)"
echo "Calling setup.py bdist_wheel for split build (BUILD_PYTHON_ONLY)"
time EXTRA_CAFFE2_CMAKE_FLAGS=${EXTRA_CAFFE2_CMAKE_FLAGS[@]} \
BUILD_LIBTORCH_WHL=0 BUILD_PYTHON_ONLY=1 \
BUILD_LIBTORCH_CPU_WITH_DEBUG=$BUILD_DEBUG_INFO \
USE_NCCL=${USE_NCCL} USE_RCCL=${USE_RCCL} USE_KINETO=${USE_KINETO} \
python setup.py bdist_wheel -d /tmp/$WHEELHOUSE_DIR --cmake
echo "Finished setup.py bdist_wheel for split build (BUILD_PYTHON_ONLY)"
else
time CMAKE_ARGS=${CMAKE_ARGS[@]} \
EXTRA_CAFFE2_CMAKE_FLAGS=${EXTRA_CAFFE2_CMAKE_FLAGS[@]} \
BUILD_LIBTORCH_CPU_WITH_DEBUG=$BUILD_DEBUG_INFO \
USE_NCCL=${USE_NCCL} USE_RCCL=${USE_RCCL} USE_KINETO=${USE_KINETO} \
python setup.py bdist_wheel -d /tmp/$WHEELHOUSE_DIR
fi
echo "Finished setup.py bdist at $(date)"

# Build libtorch packages
@@ -282,6 +306,8 @@ echo 'Built this wheel:'
ls /tmp/$WHEELHOUSE_DIR
mkdir -p "/$WHEELHOUSE_DIR"
mv /tmp/$WHEELHOUSE_DIR/torch*linux*.whl /$WHEELHOUSE_DIR/
mv /tmp/$WHEELHOUSE_DIR/torch_no_python*.whl /$WHEELHOUSE_DIR/ || true

if [[ -n "$BUILD_PYTHONLESS" ]]; then
mkdir -p /$LIBTORCH_HOUSE_DIR
mv /tmp/$LIBTORCH_HOUSE_DIR/*.zip /$LIBTORCH_HOUSE_DIR
@@ -292,7 +318,7 @@ rm -rf /tmp_dir
mkdir /tmp_dir
pushd /tmp_dir

for pkg in /$WHEELHOUSE_DIR/torch*linux*.whl /$LIBTORCH_HOUSE_DIR/libtorch*.zip; do
for pkg in /$WHEELHOUSE_DIR/torch*linux*.whl /$WHEELHOUSE_DIR/torch_no_python*.whl /$LIBTORCH_HOUSE_DIR/libtorch*.zip; do

# if the glob didn't match anything
if [[ ! -e $pkg ]]; then
@@ -408,6 +434,14 @@ for pkg in /$WHEELHOUSE_DIR/torch*linux*.whl /$LIBTORCH_HOUSE_DIR/libtorch*.zip;
popd
fi

# @sahanp todo: Remove this line
echo "current files in directory"
ls -l

echo "removing extraneous .so and .a files"
# todo @PaliC: Remove these .so and .a files before hand in the split build
rm *.so *.a || true

# zip up the wheel back
zip -rq $(basename $pkg) $PREIX*

@@ -441,7 +475,9 @@ if [[ -z "$BUILD_PYTHONLESS" ]]; then
pushd $PYTORCH_ROOT/test

# Install the wheel for this Python version
pip uninstall -y "$TORCH_NO_PYTHON_PACKAGE_NAME" || true
pip uninstall -y "$TORCH_PACKAGE_NAME"
pip install "$TORCH_NO_PYTHON_PACKAGE_NAME" --no-index -f /$WHEELHOUSE_DIR --no-dependencies -v || true
pip install "$TORCH_PACKAGE_NAME" --no-index -f /$WHEELHOUSE_DIR --no-dependencies -v

# Print info on the libraries installed in this wheel
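A hedged sketch of how the split build could be driven end to end; the entry script and most variable values are assumptions. `USE_SPLIT_BUILD=true` is the switch added above, which yields both a `torch` and a `torch_no_python` wheel:

```bash
# Hypothetical split-build invocation inside the builder container.
export USE_SPLIT_BUILD=true          # switch added in this PR
export DESIRED_PYTHON="3.10"         # example interpreter
export DESIRED_CUDA=cpu              # example target
export PYTORCH_ROOT=/pytorch         # assumed checkout location
bash manywheel/build_cpu.sh          # assumed entry point that sources build_common.sh
```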
17 changes: 0 additions & 17 deletions manywheel/build_docker.sh
@@ -16,70 +16,61 @@ case ${GPU_ARCH_TYPE} in
cpu)
TARGET=cpu_final
DOCKER_TAG=cpu
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux-cpu
GPU_IMAGE=centos:7
DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=9"
;;
cpu-manylinux_2_28)
TARGET=cpu_final
DOCKER_TAG=cpu
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux_2_28-cpu
GPU_IMAGE=amd64/almalinux:8
DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=11"
MANY_LINUX_VERSION="2_28"
;;
cpu-aarch64)
TARGET=final
DOCKER_TAG=cpu-aarch64
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux-cpu-aarch64
GPU_IMAGE=arm64v8/centos:7
DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=10"
MANY_LINUX_VERSION="aarch64"
;;
cpu-aarch64-2_28)
TARGET=final
DOCKER_TAG=cpu-aarch64
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux_2_28-cpu-aarch64
GPU_IMAGE=arm64v8/almalinux:8
DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=11"
MANY_LINUX_VERSION="2_28_aarch64"
;;
cpu-cxx11-abi)
TARGET=final
DOCKER_TAG=cpu-cxx11-abi
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux-cpu-cxx11-abi
GPU_IMAGE=""
DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=9"
MANY_LINUX_VERSION="cxx11-abi"
;;
cpu-s390x)
TARGET=final
DOCKER_TAG=cpu-s390x
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux-cpu-s390x
GPU_IMAGE=redhat/ubi9
DOCKER_GPU_BUILD_ARG=""
MANY_LINUX_VERSION="s390x"
;;
cuda)
TARGET=cuda_final
DOCKER_TAG=cuda${GPU_ARCH_VERSION}
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux-cuda${GPU_ARCH_VERSION//./}
# Keep this up to date with the minimum version of CUDA we currently support
GPU_IMAGE=centos:7
DOCKER_GPU_BUILD_ARG="--build-arg BASE_CUDA_VERSION=${GPU_ARCH_VERSION} --build-arg DEVTOOLSET_VERSION=9"
;;
cuda-manylinux_2_28)
TARGET=cuda_final
DOCKER_TAG=cuda${GPU_ARCH_VERSION}
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux_2_28-cuda${GPU_ARCH_VERSION//./}
GPU_IMAGE=amd64/almalinux:8
DOCKER_GPU_BUILD_ARG="--build-arg BASE_CUDA_VERSION=${GPU_ARCH_VERSION} --build-arg DEVTOOLSET_VERSION=11"
MANY_LINUX_VERSION="2_28"
;;
cuda-aarch64)
TARGET=cuda_final
DOCKER_TAG=cuda${GPU_ARCH_VERSION}
LEGACY_DOCKER_IMAGE=''
GPU_IMAGE=arm64v8/centos:7
DOCKER_GPU_BUILD_ARG="--build-arg BASE_CUDA_VERSION=${GPU_ARCH_VERSION} --build-arg DEVTOOLSET_VERSION=11"
MANY_LINUX_VERSION="aarch64"
@@ -88,7 +79,6 @@ case ${GPU_ARCH_TYPE} in
rocm)
TARGET=rocm_final
DOCKER_TAG=rocm${GPU_ARCH_VERSION}
LEGACY_DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/manylinux-rocm:${GPU_ARCH_VERSION}
GPU_IMAGE=rocm/dev-centos-7:${GPU_ARCH_VERSION}-complete
PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx1030;gfx1100"
ROCM_REGEX="([0-9]+)\.([0-9]+)[\.]?([0-9]*)"
@@ -114,7 +104,6 @@ DOCKER_NAME=manylinux${MANY_LINUX_VERSION}
DOCKER_IMAGE=${DOCKER_REGISTRY}/pytorch/${DOCKER_NAME}-builder:${DOCKER_TAG}
if [[ -n ${MANY_LINUX_VERSION} && -z ${DOCKERFILE_SUFFIX} ]]; then
DOCKERFILE_SUFFIX=_${MANY_LINUX_VERSION}
LEGACY_DOCKER_IMAGE=''
fi
(
set -x
@@ -135,9 +124,6 @@ DOCKER_IMAGE_SHA_TAG=${DOCKER_IMAGE}-${GIT_COMMIT_SHA}

(
set -x
if [[ -n ${LEGACY_DOCKER_IMAGE} ]]; then
docker tag ${DOCKER_IMAGE} ${LEGACY_DOCKER_IMAGE}
fi
if [[ -n ${GITHUB_REF} ]]; then
docker tag ${DOCKER_IMAGE} ${DOCKER_IMAGE_BRANCH_TAG}
docker tag ${DOCKER_IMAGE} ${DOCKER_IMAGE_SHA_TAG}
@@ -148,9 +134,6 @@ if [[ "${WITH_PUSH}" == true ]]; then
(
set -x
docker push "${DOCKER_IMAGE}"
if [[ -n ${LEGACY_DOCKER_IMAGE} ]]; then
docker push "${LEGACY_DOCKER_IMAGE}"
fi
if [[ -n ${GITHUB_REF} ]]; then
docker push "${DOCKER_IMAGE_BRANCH_TAG}"
docker push "${DOCKER_IMAGE_SHA_TAG}"
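With the legacy aliases gone, a hedged sketch of what a single image build produces; the variable names follow the script above, but the invocation details and registry value are assumptions:

```bash
# Hypothetical local image build after this change.
GPU_ARCH_TYPE=cuda GPU_ARCH_VERSION=11.8 DOCKER_REGISTRY=docker.io \
  bash manywheel/build_docker.sh
# Only the builder-style name is tagged, e.g.:
#   docker.io/pytorch/manylinux-builder:cuda11.8
# plus branch/SHA tags when GITHUB_REF is set; no legacy manylinux-cuda alias is produced.
```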
12 changes: 3 additions & 9 deletions manywheel/build_libtorch.sh
@@ -69,12 +69,8 @@ fi
# ever pass one python version, so we assume that DESIRED_PYTHON is not a list
# in this case
if [[ -n "$DESIRED_PYTHON" && "$DESIRED_PYTHON" != cp* ]]; then
if [[ "$DESIRED_PYTHON" == '3.7' ]]; then
DESIRED_PYTHON='cp37-cp37m'
else
python_nodot="$(echo $DESIRED_PYTHON | tr -d m.u)"
DESIRED_PYTHON="cp${python_nodot}-cp${python_nodot}"
fi
python_nodot="$(echo $DESIRED_PYTHON | tr -d m.u)"
DESIRED_PYTHON="cp${python_nodot}-cp${python_nodot}"
fi
pydir="/opt/python/$DESIRED_PYTHON"
export PATH="$pydir/bin:$PATH"
@@ -97,9 +93,7 @@ fi
pushd "$PYTORCH_ROOT"
python setup.py clean
retry pip install -qr requirements.txt
if [[ "$DESIRED_PYTHON" == "cp37-cp37m" ]]; then
retry pip install -q numpy==1.15
elif [[ "$DESIRED_PYTHON" == "cp38-cp38" ]]; then
if [[ "$DESIRED_PYTHON" == "cp38-cp38" ]]; then
retry pip install -q numpy==1.15
else
retry pip install -q numpy==1.11
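A short illustrative trace (sample versions only) of the simplified `DESIRED_PYTHON` normalization, now that the cp37 special case is gone:

```bash
# Illustrative trace of the DESIRED_PYTHON -> cpXY-cpXY normalization (samples).
for DESIRED_PYTHON in 3.8 3.11 3.12; do
  python_nodot="$(echo $DESIRED_PYTHON | tr -d m.u)"
  echo "$DESIRED_PYTHON -> cp${python_nodot}-cp${python_nodot}"
done
```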
10 changes: 5 additions & 5 deletions manywheel/build_scripts/build.sh
@@ -58,14 +58,14 @@ autoconf --version
build_openssl $OPENSSL_ROOT $OPENSSL_HASH
/build_scripts/install_cpython.sh

PY37_BIN=/opt/python/cp37-cp37m/bin
PY39_BIN=/opt/python/cp39-cp39/bin

# Our openssl doesn't know how to find the system CA trust store
# (https://github.com/pypa/manylinux/issues/53)
# And it's not clear how up-to-date that is anyway
# So let's just use the same one pip and everyone uses
$PY37_BIN/pip install certifi
ln -s $($PY37_BIN/python -c 'import certifi; print(certifi.where())') \
$PY39_BIN/pip install certifi
ln -s $($PY39_BIN/python -c 'import certifi; print(certifi.where())') \
/opt/_internal/certs.pem
# If you modify this line you also have to modify the versions in the
# Dockerfiles:
@@ -86,8 +86,8 @@ tar -xzf patchelf-0.10.tar.gz
rm -rf patchelf-0.10.tar.gz patchelf-0.10

# Install latest pypi release of auditwheel
$PY37_BIN/pip install auditwheel
ln -s $PY37_BIN/auditwheel /usr/local/bin/auditwheel
$PY39_BIN/pip install auditwheel
ln -s $PY39_BIN/auditwheel /usr/local/bin/auditwheel

# Clean up development headers and other unnecessary stuff for
# final image