chore(deps): update dependency torchvision to v0.20.1 #27
This PR contains the following updates:
torchvision: ==0.16.2 -> ==0.20.1
Release Notes
pytorch/vision (torchvision)
v0.20.1
Compare Source
v0.20.0: Torchvision 0.20 release
Compare Source
Highlights
Encoding / Decoding images
Torchvision is further extending its encoding/decoding capabilities. For this version, we added a WEBP decoder, and a batch JPEG decoder on CUDA GPUs, which can lead to 10X speed-ups over CPU decoding.
We have also improved the UX of our decoding APIs to be more user-friendly. The main entry point is now torchvision.io.decode_image(), and it can take as input either a path (as str or pathlib.Path), or a tensor containing the raw encoded data. Read more on the docs!
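A minimal sketch of the new entry point (the file name is hypothetical, and WEBP decoding assumes a torchvision build with WEBP support):

```python
import torch
from torchvision.io import decode_image

# Decode straight from a path (str or pathlib.Path); the format is inferred from the data.
img = decode_image("example.webp")  # uint8 tensor of shape (C, H, W)

# Or decode from a tensor holding the raw encoded bytes.
with open("example.webp", "rb") as f:
    raw = torch.frombuffer(bytearray(f.read()), dtype=torch.uint8)
img_from_bytes = decode_image(raw)
```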
We also added support for HEIC and AVIF decoding, but these are currently only available when building from source. We are working on making those available directly in the upcoming releases. Stay tuned!
Detailed changes
Bug Fixes
[datasets] Update URL of SBDataset train_noval (#8551)
[datasets] EuroSAT: fix SSL certificate issues (#8563)
[io] Check average_rate availability in video reader (#8548)
New Features
[io] Add batch JPEG GPU decoding (decode_jpeg()) (#8496)
[io] Add WEBP image decoder: decode_image(), decode_webp() (#8527, #8612, #8610)
[io] Add HEIC and AVIF decoders, only available when building from source (#8597, #8596, #8647, #8613, #8621)
Improvements
[io] Add support for decoding 16-bit PNGs (#8524)
[io] Allow decoding functions to accept the mode parameter as a string (#8627)
[io] Allow decode_image() to support paths (#8624)
[io] Automatically send video to CPU in io.write_video (#8537)
[datasets] Better progress bar for file downloading (#8556)
[datasets] Add Path type annotation for ImageFolder (#8526)
[ops] Register nms and roi_align Autocast policy for PyTorch Intel GPU backend (#8541)
[transforms] Use Sequence for parameters type checking in transforms.RandomErase (#8615)
[transforms] Support v2.functional.gaussian_blur backprop (#8486)
[transforms] Expose transforms.v2 utils for writing custom transforms (#8670)
[utils] Fix f-string in color error message (#8639)
[packaging] Revamped and improved debuggability of setup.py build (#8535, #8581, #8581, #8582, #8590, #8533, #8528, #8659)
[Documentation] Various documentation improvements (#8605, #8611, #8506, #8507, #8539, #8512, #8513, #8583, #8633)
[tests] Various tests improvements (#8580, #8553, #8523, #8617, #8518, #8579, #8558, #8617, #8641)
[code quality] Various code quality improvements (#8552, #8555, #8516, #8526, #8602, #8615, #8639, #8532)
[ci] #8562, #8644, #8592, #8542, #8594, #8530, #8656
Contributors
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release:
Adam J. Stewart, AJS Payne, Andreas Floros, Andrey Talman, Bhavay Malhotra, Brizar, deekay42, Ehsan, Feng Yuan, Joseph Macaranas, Martin, Masahiro Hiramori, Nicolas Hug, Nikita Shulga , Sergii Dymchenko, Stefan Baumann, venkatram-dev, Wang, Chuanqi
v0.19.1: TorchVision 0.19.1 Release
Compare Source
This is a patch release, which is compatible with PyTorch 2.4.1. There are no new features added.
v0.19.0: Torchvision 0.19 release
Compare Source
Highlights
Encoding / Decoding images
Torchvision is extending its encoding/decoding capabilities. For this version, we added a GIF decoder which is available as torchvision.io.decode_gif(raw_tensor), torchvision.io.decode_image(raw_tensor), and torchvision.io.read_image(path_to_image).
We also added support for jpeg GPU encoding in torchvision.io.encode_jpeg(). This is 10X faster than the existing CPU jpeg encoder. Read more on the docs!
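A minimal sketch of the GPU encoder (the input file name is hypothetical; a CUDA device is assumed to be available, otherwise the CPU path is used):

```python
import torch
from torchvision.io import read_image, encode_jpeg

img = read_image("example.png")  # uint8 tensor of shape (C, H, W)

# Moving the tensor to a CUDA device selects the GPU encoder; CPU tensors use the CPU path.
device = "cuda" if torch.cuda.is_available() else "cpu"
jpeg_bytes = encode_jpeg(img.to(device), quality=90)  # 1-D uint8 tensor with the encoded bytes
```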
Stay tuned for more improvements coming in the next versions. We plan to improve jpeg GPU decoding, and add more image decoders (webp in particular).
Resizing according to the longest edge of an image
It is now possible to resize images by setting torchvision.transforms.v2.Resize(max_size=N): this will resize the longest edge of the image exactly to max_size, making sure the image dimensions don't exceed this value. Read more on the docs!
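For instance, a minimal sketch (assuming the v2 Resize transform accepts size=None together with max_size, as introduced in #8459):

```python
import torch
from torchvision.transforms import v2

# The longest edge becomes exactly 512; the other edge is scaled to preserve the aspect ratio.
resize = v2.Resize(size=None, max_size=512)

img = torch.randint(0, 256, (3, 600, 800), dtype=torch.uint8)
out = resize(img)  # 800 -> 512, 600 -> 384
```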
Detailed changes
Bug Fixes
[datasets] SBDataset: Only download noval file when image_set='train_noval' (#8475)
[datasets] Update the download url in class EMNIST (#8350)
[io] Fix compilation error when there is no libjpeg (#8342)
[reference scripts] Fix use of cutmix_alpha in classification training references (#8448)
[utils] Allow K=1 in draw_keypoints (#8439)
New Features
[io] Add decoder for GIF images (decode_gif(), decode_image(), read_image()) (#8406, #8419)
[transforms] Add GaussianNoise transform (#8381)
Improvements
[transforms] Allow v2 Resize to resize longer edge exactly to max_size (#8459)
[transforms] Add min_area parameter to SanitizeBoundingBox (#7735)
[transforms] Make adjust_hue() work with numpy 2.0 (#8463)
[transforms] Enable one-hot-encoded labels in MixUp and CutMix (#8427)
[transforms] Create kernel on-device for transforms.functional.gaussian_blur (#8426)
[io] Adding GPU acceleration to encode_jpeg (10X faster than CPU encoder) (#8391)
[io] read_video: accept BytesIO objects on pyav backend (#8442)
[io] Add compatibility with FFMPEG 7.0 (#8408)
[datasets] Add extra to install gdown (#8430)
[datasets] Support encoded RLE format for COCO segmentations (#8387)
[datasets] Added binary cat vs dog classification target type to Oxford pet dataset (#8388)
[datasets] Return labels for FER2013 if possible (#8452)
[ops] Force use of torch.compile on deterministic roi_align implementation (#8436)
[utils] Add float support to utils.draw_bounding_boxes() (#8328)
[feature_extraction] Add concrete_args to feature extraction tracing (#8393)
[Docs] Various documentation improvements (#8429, #8467, #8469, #8332, #8262, #8341, #8392, #8386, #8385, #8411).
[Tests] Various testing improvements (#8454, #8418, #8480, #8455)
[Code quality] Various code quality improvements (#8404, #8402, #8345, #8335, #8481, #8334, #8384, #8451, #8470, #8413, #8414, #8416, #8412)
Contributors
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release:
Adam J. Stewart ahmadsharif1, AJS Payne, Andrew Lingg, Andrey Talman, Anner, Antoine Broyelle, cdzhan, deekay42, drhead, Edward Z. Yang, Emin Orhan, Fangjun Kuang, G, haarisr, Huy Do, Jack Newsom, JavaZero, Mahdi Lamb, Mantas, Nicolas Hug, Nicolas Hug , nihui, Richard Barnes , Richard Zou, Richie Bendall, Robert-André Mauchin, Ross Wightman, Siddarth Ijju, vfdev
v0.18.1: TorchVision 0.18.1 Release
Compare Source
This is a patch release, which is compatible with PyTorch 2.3.1. There are no new features added.
v0.18.0: TorchVision 0.18 Release
Compare Source
BC-Breaking changes
[datasets] gdown is now a required dependency for downloading datasets that are on Google Drive. This change was actually introduced in 0.17.1 (repeated here for visibility) (#8237)
[datasets] The StanfordCars dataset isn’t available for download anymore. Please follow these instructions to manually download it (#8309, #8324)
[transforms] to_grayscale and corresponding transform now always return 3 channels when num_output_channels=3 (#8229)
Bug Fixes
[datasets] Fix download URL of EMNIST dataset (#8350)
[datasets] Fix root path expansion in Kitti dataset (#8164)
[models] Fix default momentum value of BatchNorm2d in MaxViT from 0.99 to 0.01 (#8312)
[reference scripts] Fix CutMix and MixUp arguments (#8287)
[MPS, build] Link essential libraries in cmake (#8230)
[build] Fix build with ffmpeg 6.0 (#8096)
New Features
[transforms] New GrayscaleToRgb transform (#8247)
[transforms] New JPEG augmentation transform (#8316)
Improvements
[datasets, io] Added pathlib.Path support to datasets and io utilities (#8196, #8200, #8314, #8321)
[datasets] Added allow_empty parameter to ImageFolder and related utils to support empty classes during image discovery (#8311)
[datasets] Raise proper error in CocoDetection when a slice is passed (#8227)
[io] Added support for EXIF orientation in JPEG and PNG decoders (#8303, #8279, #8342, #8302)
[io] Avoiding unnecessary copies on io.VideoReader with pyav backend (#8173)
[transforms] Allow SanitizeBoundingBoxes to sanitize more than labels (#8319)
[transforms] Add sanitize_bounding_boxes kernel/functional (#8308)
[transforms] Make perspective more numerically stable (#8249)
[transforms] Allow 2D numpy arrays as inputs for to_image (#8256)
[transforms] Speed-up rotate for 90, 180, 270 degrees (#8295)
[transforms] Enabled torch compile on affine transform (#8218)
[transforms] Avoid some graph breaks in transforms (#8171)
[utils] Add float support to draw_keypoints (#8276)
[utils] Add visibility parameter to draw_keypoints (#8225)
[utils] Add float support to draw_segmentation_masks (#8150)
[utils] Better show overlap section of masks in draw_segmentation_masks (#8213)
[Docs] Various documentation improvements (#8341, #8332, #8198, #8318, #8202, #8246, #8208, #8231, #8300, #8197)
[code quality] Various code quality improvements (#8273, #8335, #8234, #8345, #8334, #8119, #8251, #8329, #8217, #8180, #8105, #8280, #8161, #8313)
Contributors
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release:
Adam Dangoor Ahmad Sharif , ahmadsharif1, Andrey Talman, Anner, anthony-cabacungan, Arun Sathiya, Brizar, Brizar , cdzhan, Danylo Baibak, Huy Do, Ivan Magazinnik, JavaZero, Johan Edstedt, Li-Huai (Allan) Lin, Mantas, Mark Harfouche, Mithra, Nicolas Hug, Nicolas Hug , nihui, Philip Meier, Philip Meier , RazaProdigy , Richard Barnes , Riza Velioglu, sam-watts, Santiago Castro, Sergii Dymchenko, Syed Raza, talcs, Thien Tran, Thien Tran , TilmannR, Tobias Fischer, vfdev, vfdev , Zhu Lin Ch'ng, Zoltán Böszörményi.
v0.17.2: TorchVision 0.17.2 Release
Compare Source
This is a patch release, which is compatible with PyTorch 2.2.2. There are no new features added.
v0.17.1: TorchVision 0.17.1 Release
Compare Source
This is a patch release, which is compatible with PyTorch 2.2.1.
Bug Fixes
Add gdown dependency to support downloading datasets from Google Drive (https://github.com/pytorch/vision/pull/8237)
Fix convert_bounding_box_format when passing string parameters (https://github.com/pytorch/vision/issues/8258)
v0.17.0: TorchVision 0.17 Release
Compare Source
Highlights
The V2 transforms are now stable!
The torchvision.transforms.v2 namespace was still in BETA stage until now. It is now stable! Whether you’re new to Torchvision transforms, or you’re already experienced with them, we encourage you to start with Getting started with transforms v2 in order to learn more about what can be done with the new v2 transforms.
Browse our main docs for general information and performance tips. The available transforms and functionals are listed in the API reference. Additional information and tutorials can also be found in our example gallery, e.g. Transforms v2: End-to-end object detection/segmentation example or How to write your own v2 transforms.
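For readers new to the v2 API, a short illustrative pipeline (the specific transforms and parameter values here are arbitrary, not prescribed by the release notes):

```python
import torch
from torchvision.transforms import v2

transforms = v2.Compose([
    v2.RandomResizedCrop(size=(224, 224), antialias=True),
    v2.RandomHorizontalFlip(p=0.5),
    v2.ToDtype(torch.float32, scale=True),  # convert uint8 [0, 255] to float [0.0, 1.0]
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = torch.randint(0, 256, (3, 300, 400), dtype=torch.uint8)
out = transforms(img)
```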
Towards torch.compile() support
We are progressively adding support for torch.compile() to torchvision interfaces, reducing graph breaks and allowing dynamic shapes.
The torchvision ops (nms, [ps_]roi_align, [ps_]roi_pool and deform_conv_2d) are now compatible with torch.compile and dynamic shapes.
On the transforms side, the majority of low-level kernels (like resize_image() or crop_image()) should compile properly without graph breaks and with dynamic shapes. We are still addressing the remaining edge-cases, moving up towards full functional support and classes, and you should expect more progress on that front with the next release.
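As an illustration of what this enables, a minimal sketch of compiling one of these ops (not taken from the release notes; the box values are arbitrary):

```python
import torch
from torchvision.ops import nms

# Compile the nms op; dynamic=True allows the number of boxes to vary between calls.
compiled_nms = torch.compile(nms, dynamic=True)

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0],
                      [20.0, 20.0, 30.0, 30.0]])
scores = torch.tensor([0.9, 0.8, 0.7])
keep = compiled_nms(boxes, scores, iou_threshold=0.5)  # indices of the kept boxes
```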
Detailed Changes
Breaking changes / Finalizing deprecations
The default value of the antialias parameter changed from None to True, in all transforms that perform resizing. This change of default has been communicated in previous versions, and should drastically reduce the amount of bugs/surprises as it aligns the tensor backend with the PIL backend. Simply put: from now on, antialias is always applied when resizing (with bilinear or bicubic modes), whether you're using tensors or PIL images. This change only affects the tensor backend, as PIL always applies antialias anyway. (#7949)
The torchvision.transforms.functional_tensor.py and torchvision.transforms.functional_pil.py modules were removed, as these had been deprecated for a while. Use the public functionals from torchvision.transforms.v2.functional instead. (#7953)
to_pil_image now provides the same output for equivalent numpy arrays and tensor inputs (#8097)
Bug Fixes
[datasets] Fix root path expansion in datasets.Kitti (#8165)
[transforms] allow sequence fill for v2 AA scripted (#7919)
[reference scripts] Fix quantized references (#8073)
[reference scripts] Fix IoUs reported in segmentation references (#7916)
New Features
[datasets] add Imagenette dataset (#8139)
Improvements
[transforms] The v2 transforms are now officially stable and out of BETA stage (#8111)
[ops] The ops ([ps_]roi_align, [ps_]roi_pool, deform_conv_2d) are now compatible with torch.compile and dynamic shapes (#8061, #8049, #8062, #8063, #7942, #7944)
[models] Allow custom atrous_rates for deeplabv3_mobilenet_v3_large (#8019)
[transforms] allow float fill for integer images in F.pad (#7950)
[transforms] allow len 1 sequences for fill with PIL (#7928)
[transforms] allow size to be generic Sequence in Resize (#7999)
[transforms] Making root parameter optional for Vision Dataset (#8124)
[transforms] Added support for tv tensors in torch compile for func ops (#8110)
[transforms] Reduced number of graphs for compiled resize (#8108)
[misc] Various fixes for S390x support (#8149)
[Docs] Various Documentation enhancements (#8007, #8014, #7940, #7989, #7993, #8114, #8117, #8121, #7978, #8002, #7957, #7907, #8000, #7963)
[Tests] Various test enhancements (#8032, #7927, #7933, #7934, #7935, #7939, #7946, #7943, #7968, #7967, #8033, #7975, #7954, #8001, #7962, #8003, #8011, #8012, #8013, #8023, #7973, #7970, #7976, #8037, #8052, #7982, #8145, #8148, #8144, #8058, #8057, #7961, #8132, #8133, #8160)
[Code Quality] (#8077, #8070, #8004, #8113,
Contributors
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release:
Aleksei Nikiforov. Alex Wei, Andrey Talman, Chunyuan WU, CptCaptain, Edward Z. Yang, Gu Wang, Haochen Yu, Huy Do, Jeff Daily, Josh Levy-Kramer, moto, Nicolas Hug, NVS Abhilash, Omkar Salpekar, Philip Meier, Sergii Dymchenko, Siddharth Singh, Thiago Crepaldi, Thomas Fritz, TilmannR, vfdev-5, Zeeshan Khan Suri.
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.