Releases: albumentations-team/albumentations
Albumentations 1.4.21 Release Notes
- Support Our Work
- Transforms
- Core
- Benchmark
- Speedups
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
Transforms
Auto padding in crops
Added an option to pad the image if the crop size is larger than the image size.
Old way:
```python
[
    A.PadIfNeeded(min_height=1024, min_width=1024, p=1),
    A.RandomCrop(height=1024, width=1024, p=1),
]
```
New way:
```python
A.RandomCrop(height=1024, width=1024, p=1, pad_if_needed=True)
```
Works for crop transforms. You may also use it to pad the image to a desired size.
Core
Random state
The random state of the pipeline no longer depends on the global random state.
Before:
```python
random.seed(seed)
np.random.seed(seed)
transform = A.Compose(...)
```
Now:
```python
transform = A.Compose(seed=seed, ...)
```
or
```python
transform = A.Compose(...)
transform.set_random_seed(seed)
```
Saving used parameters
Now you can get the exact parameters that were used in the pipeline for a given sample:
```python
transform = A.Compose(save_applied_params=True, ...)
result = transform(image=image, bboxes=bboxes, mask=mask, keypoints=keypoints)
print(result["applied_transforms"])
```
Benchmark
Moved the benchmark to a separate repo: https://github.com/albumentations-team/benchmark/
Current results for uint8 images (throughput in images per second; higher is better):
| Transform | albumentations 1.4.20 | augly 1.0.0 | imgaug 0.4.0 | kornia 0.7.3 | torchvision 0.20.0 |
|---|---|---|---|---|---|
HorizontalFlip | 8325 ± 955 | 4807 ± 818 | 6042 ± 788 | 390 ± 106 | 914 ± 67 |
VerticalFlip | 20493 ± 1134 | 9153 ± 1291 | 10931 ± 1844 | 1212 ± 402 | 3198 ± 200 |
Rotate | 1272 ± 12 | 1119 ± 41 | 1136 ± 218 | 143 ± 11 | 181 ± 11 |
Affine | 967 ± 3 | - | 774 ± 97 | 147 ± 9 | 130 ± 12 |
Equalize | 961 ± 4 | - | 581 ± 54 | 152 ± 19 | 479 ± 12 |
RandomCrop80 | 118946 ± 741 | 25272 ± 1822 | 11503 ± 441 | 1510 ± 230 | 32109 ± 1241 |
ShiftRGB | 1873 ± 252 | - | 1582 ± 65 | - | - |
Resize | 2365 ± 153 | 611 ± 78 | 1806 ± 63 | 232 ± 24 | 195 ± 4 |
RandomGamma | 8608 ± 220 | - | 2318 ± 269 | 108 ± 13 | - |
Grayscale | 3050 ± 597 | 2720 ± 932 | 1681 ± 156 | 289 ± 75 | 1838 ± 130 |
RandomPerspective | 410 ± 20 | - | 554 ± 22 | 86 ± 11 | 96 ± 5 |
GaussianBlur | 1734 ± 204 | 242 ± 4 | 1090 ± 65 | 176 ± 18 | 79 ± 3 |
MedianBlur | 862 ± 30 | - | 813 ± 30 | 5 ± 0 | - |
MotionBlur | 2975 ± 52 | - | 612 ± 18 | 73 ± 2 | - |
Posterize | 5214 ± 101 | - | 2097 ± 68 | 430 ± 49 | 3196 ± 185 |
JpegCompression | 845 ± 61 | 778 ± 5 | 459 ± 35 | 71 ± 3 | 625 ± 17 |
GaussianNoise | 147 ± 10 | 67 ± 2 | 206 ± 11 | 75 ± 1 | - |
Elastic | 171 ± 15 | - | 235 ± 20 | 1 ± 0 | 2 ± 0 |
Clahe | 423 ± 10 | - | 335 ± 43 | 94 ± 9 | - |
CoarseDropout | 11288 ± 609 | - | 671 ± 38 | 536 ± 87 | - |
Blur | 4816 ± 59 | 246 ± 3 | 3807 ± 325 | - | - |
ColorJitter | 536 ± 41 | 255 ± 13 | - | 55 ± 18 | 46 ± 2 |
Brightness | 4443 ± 84 | 1163 ± 86 | - | 472 ± 101 | 429 ± 20 |
Contrast | 4398 ± 143 | 736 ± 79 | - | 425 ± 52 | 335 ± 35 |
RandomResizedCrop | 2952 ± 24 | - | - | 287 ± 58 | 511 ± 10 |
Normalize | 1016 ± 84 | - | - | 626 ± 40 | 519 ± 12 |
PlankianJitter | 1844 ± 208 | - | - | 813 ± 211 | - |
Speedups
- Speedup in PlanckianJitter in uint8 mode
- Replaced `cv2.addWeighted` with `wsum` from the simsimd package
Albumentations 1.4.20 Release Notes
Hotfix version.
- Fix in `check_version`
- Fix in `PiecewiseAffine`
- Fix in `RandomSizedCrop` and `RandomResizedCrop`
- Fix in `RandomOrder`
Albumentations 1.4.19 Release Notes
- Support Our Work
- Transforms
- Core
- Bug Fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
Transforms
Added a `mask_interpolation` parameter to all transforms that interpolate masks (usage sketch after the list), including:
- RandomSizedCrop
- RandomResizedCrop
- RandomSizedBBoxSafeCrop
- CropAndPad
- Resize
- RandomScale
- LongestMaxSize
- SmallestMaxSize
- Rotate
- SafeRotate
- OpticalDistortion
- GridDistortion
- ElasticTransform
- Perspective
- PiecewiseAffine
by @ternaus
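A minimal sketch of how the new parameter can be used; the specific transforms and interpolation flags here are illustrative choices, not a prescription:

```python
import cv2
import albumentations as A

# Keep masks label-safe with nearest-neighbor interpolation while images
# are resized with the transforms' default interpolation.
transform = A.Compose([
    A.Resize(height=512, width=512, mask_interpolation=cv2.INTER_NEAREST),
    A.Rotate(limit=30, mask_interpolation=cv2.INTER_NEAREST, p=0.5),
])
```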
Core
- Minimum supported Python version is 3.9
- Removed dependency on scikit-image
- Switched the random number generator from `np.random.RandomState` to `np.random.Generator`. The latter is about 50% faster, giving speedups in all transforms that heavily use the random generator
- Where possible, moved from `cv2.LUT` to the LUT from the stringzilla package
- Added a `mask_interpolation` parameter to `Compose` that overrides the mask interpolation value of all transforms in that `Compose`. You can now use the more accurate `cv2.INTER_NEAREST_EXACT` for semantic segmentation, or work with depth and heatmap estimation using cubic, area, linear, etc. (see the sketch below)
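A short sketch of the `Compose`-level override described above; the inner transforms are arbitrary examples:

```python
import cv2
import albumentations as A

# One flag on Compose overrides mask interpolation for every transform inside it,
# e.g. the more accurate nearest-exact mode for semantic segmentation masks.
transform = A.Compose(
    [A.RandomScale(scale_limit=0.2, p=1), A.Rotate(limit=45, p=1)],
    mask_interpolation=cv2.INTER_NEAREST_EXACT,
)
```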
Bug Fixes
- Bugfix in ISONoise
- Bugfix: Ensure that transformed masks are contiguous arrays, by @Callidior
- Bugfix in Solarize
- Bugfix in bounding box filtering
- Bugfix in OpticalDistortion
- Bugfix in balanced scale in Affine
Albumentations 1.4.18 Release Notes
- Support Our Work
- Transforms
- Core
- Deprecations
- Bugfixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
Transforms
GridDistortion
Added support for keypoints
GridDropout
Added support for keypoints and bounding boxes
GridElasticDeform
Added support for keypoints and bounding boxes
MaskDropout
Added support for keypoints and bounding boxes
Morphological
Added support for bounding boxes and keypoints
OpticalDistortion
Added support for keypoints
PixelDropout
Added support for keypoints and bounding boxes
XYMasking
Added support for bounding boxes and keypoints
Core
Added support for masks as numpy arrays of shape (num_masks, height, width).
Now you can apply transforms to masks as follows (runnable sketch below):
```python
masks = <numpy array with shape (num_masks, height, width)>
transform(image=image, masks=masks)
```
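A runnable sketch of the same idea, with arbitrary shapes chosen for illustration:

```python
import numpy as np
import albumentations as A

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
masks = np.zeros((3, 256, 256), dtype=np.uint8)  # 3 masks stacked along the first axis

transform = A.Compose([A.HorizontalFlip(p=1)])
result = transform(image=image, masks=masks)
transformed_masks = result["masks"]
```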
Deprecations
Removed MixUp, as it did almost exactly the same thing as TemplateTransform.
Bugfixes
- Bugfix in RandomFog
- Bugfix in PlanckianJitter
- Several people reported an issue with masks passed as a list of numpy arrays. It appears to have been fixed as part of other work, as it can no longer be reproduced; tests were added for that case just in case.
Albumentations 1.4.17 Release Notes
- Support Our Work
- Transforms
- Core
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
Transforms
CoarseDropout
- Added bounding box support
- `remove_invisible=False` keeps keypoints
by @ternaus
ElasticTransform
Added support for keypoints
by @ternaus
Core
Added the `RandomOrder` composition
Selects N transforms to apply. The selected transforms are called in random order with force_apply=True.
Transform probabilities are normalized to sum to 1, so in this case they act as weights.
This composition is like SomeOf, but the transforms are applied in random order.
The random order is not replayed in ReplayCompose.
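A minimal usage sketch, assuming `RandomOrder` takes the list of transforms and the number to select the same way `SomeOf` does; the inner transforms and probabilities are illustrative:

```python
import albumentations as A

# Pick 2 of the 3 transforms; their p values act as selection weights,
# and the chosen transforms are applied in random order.
transform = A.Compose([
    A.RandomOrder([
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.3),
        A.GaussianBlur(p=0.2),
    ], n=2),
])
```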
Albumentations 1.4.16 Release Notes
- Support Our Work
- UI Tool
- Transforms
- Improvements and Bug Fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
UI Tool
For visual debugging, we wrote a tool that lets you visually inspect the effects of augmentations on an image.
You can find it at https://explore.albumentations.ai/
- Works for all ImageOnly transforms
- Authorized users can upload their own images
It is a work in progress and not yet stable or polished, but if you have feedback or proposals, just write in the Discord server mentioned above.
Transforms
- Updated and extended docstrings in all ImageOnly transforms.
- All ImageOnly transforms support both `uint8` and `float32` inputs
RandomSnow
Added a `texture` method to RandomSnow
RandomSunFlare
Added a `physics_based` method to RandomSunFlare
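A hedged usage sketch: the notes above do not spell out how the new modes are selected, so the `method` parameter name here is an assumption:

```python
import albumentations as A

# Assumed parameter name: method (not stated explicitly in the notes above)
snow = A.RandomSnow(method="texture", p=1)
flare = A.RandomSunFlare(method="physics_based", p=1)
```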
Improvements and Bug Fixes
- Bugfix in the albucore dependency. Now every Albumentations version is tied to a specific albucore version. Added a pre-commit hook to automatically check this on every commit.
- Bugfix in the TextImage transform: after bbox processing was rewritten in vectorized form, the transform was failing.
- As part of the work to remove the scikit-image dependency, @momincks rewrote bbox_affine in plain numpy.
- Bugfix: unexpectedly, people use bounding boxes smaller than 1 pixel. Removed the constraint that a bounding box must be at least 1x1.
- Bugfix in bounding box filtering. Now, if all bounding boxes are filtered out, the result is an empty array of shape (0, 4) rather than a plain empty array.
Albumentations 1.4.15 Release Notes
- Support Our Work
- UI Tool
- Core
- Transforms
- Improvements and Bug Fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
UI Tool
For visual debugging, we wrote a tool that lets you visually inspect the effects of augmentations on an image.
You can find it at https://explore.albumentations.ai/
Right now it supports only ImageOnly transforms, and only a subset of them.
It is a work in progress and not yet stable or polished, but if you have feedback or proposals, just write in the Discord server mentioned above.
Core
Bounding box and keypoint processing was vectorized
- You can pass a numpy array to Compose, not only a list of lists.
- Transforms should be faster as a result, but this was not benchmarked.
Transforms
Affine
- Reflection padding now works correctly in `Affine` and `ShiftScaleRotate`
CLAHE
- Added support for float32 images
Equalize
- Added support for float32 images
FancyPCA
- Added support for float32 images
- Added support for any number of channels
PixelDistributionAdaptation
- Added support for float32
- Added support for any number of channels
Flip
Still works, but deprecated. It was a very strange transform; it is hard to find a use case where you actually needed it.
It was equivalent to:
OneOf([Transpose, VerticalFlip, HorizontalFlip])
Most likely, if you need a transform that does not create artifacts, you should look at:
- Natural images => `HorizontalFlip` (symmetry group has 2 elements, effectively increasing your dataset 2x)
- Images that look natural when flipped vertically => `VerticalFlip` (symmetry group has 2 elements, effectively increasing your dataset 2x)
- Images that need to preserve parity, for example text, but where rotated documents are expected => `RandomRotate90` (symmetry group has 4 elements, effectively increasing your dataset 4x)
- Images that you can flip and rotate as you wish => `D4` (symmetry group has 8 elements, effectively increasing your dataset 8x)
ToGray
Now you can define the number of output channels in the resulting grayscale image; all channels will be identical.
Extended the ways one can obtain a grayscale image; most methods work with any number of input channels (usage sketch after the list):
- `weighted_average`: uses a weighted sum of RGB channels (0.299R + 0.587G + 0.114B). Works only with 3-channel images. Provides realistic results based on human perception.
- `from_lab`: extracts the L channel from the LAB color space. Works only with 3-channel images. Gives perceptually uniform results.
- `desaturation`: averages the maximum and minimum values across channels. Works with any number of channels. Fast but may not preserve perceived brightness well.
- `average`: simple average of all channels. Works with any number of channels. Fast but may not give realistic results.
- `max`: takes the maximum value across all channels. Works with any number of channels. Tends to produce brighter results.
- `pca`: applies Principal Component Analysis to reduce channels. Works with any number of channels. Can preserve more information but is computationally intensive.
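A short usage sketch; the parameter names `num_output_channels` and `method` follow the description above but are assumptions about the exact API:

```python
import albumentations as A

# 3 identical output channels; grayscale computed from the LAB L channel
transform = A.ToGray(num_output_channels=3, method="from_lab", p=1)
```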
SafeRotate
Now uses Affine under the hood.
Improvements and Bug Fixes
- Bugfix in `GridElasticDeform` by @4pygmalion
- Speedups in `to_float` and `from_float`
- Bugfix in `PadIfNeeded`: it did not work when empty bounding boxes were passed
Albumentations 1.4.14 Release Notes
- Support Our Work
- Transforms
- Improvements and Bug Fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
Transforms
Added the GridElasticDeform transform
Grid-based elastic deformation implementation for Albumentations.
This class applies elastic transformations using a grid-based approach.
The granularity and intensity of the distortions can be controlled using
the dimensions of the overlaying distortion grid and the magnitude parameter.
Larger grid sizes result in finer, less severe distortions.
Args:
num_grid_xy (tuple[int, int]): Number of grid cells along the width and height.
Specified as (grid_width, grid_height). Each value must be greater than 1.
magnitude (int): Maximum pixel-wise displacement for distortion. Must be greater than 0.
interpolation (int): Interpolation method to be used for the image transformation.
Default: cv2.INTER_LINEAR
mask_interpolation (int): Interpolation method to be used for mask transformation.
Default: cv2.INTER_NEAREST
p (float): Probability of applying the transform. Default: 1.0.
Targets:
image, mask
Image types:
uint8, float32
Example:
>>> transform = GridElasticDeform(num_grid_xy=(4, 4), magnitude=10, p=1.0)
>>> result = transform(image=image, mask=mask)
>>> transformed_image, transformed_mask = result['image'], result['mask']
Note:
This transformation is particularly useful for data augmentation in medical imaging
and other domains where elastic deformations can simulate realistic variations.
by @4pygmalion
PadIfNeeded
Reflection padding now works correctly with bounding boxes and keypoints
by @ternaus
RandomShadow
- Works with any number of channels
- The shadow intensity is no longer a hardcoded constant; it can now be sampled from a range
Simulates shadows for the image by reducing the brightness of the image in shadow regions.
Args:
shadow_roi (tuple): region of the image where shadows
will appear (x_min, y_min, x_max, y_max). All values should be in range [0, 1].
num_shadows_limit (tuple): Lower and upper limits for the possible number of shadows.
Default: (1, 2).
shadow_dimension (int): number of edges in the shadow polygons. Default: 5.
shadow_intensity_range (tuple): Range for the shadow intensity.
Should be two float values between 0 and 1. Default: (0.5, 0.5).
p (float): probability of applying the transform. Default: 0.5.
Targets:
image
Image types:
uint8, float32
Reference:
https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
by @JonasKlotz
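A minimal usage sketch based on the parameters documented above; the specific values are arbitrary illustrative choices:

```python
import albumentations as A

transform = A.RandomShadow(
    shadow_roi=(0, 0.5, 1, 1),            # shadows only in the lower half of the image
    num_shadows_limit=(1, 2),
    shadow_dimension=5,
    shadow_intensity_range=(0.3, 0.7),     # sampled intensity instead of a hardcoded constant
    p=1,
)
```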
Improvements and Bug Fixes
- Bugfix in `Affine`. Now `fit_output=True` works correctly with bounding boxes. By @ternaus
- Bugfix in `ColorJitter`. By @maremun
- Speedup in `CoarseDropout`. By @thomaoc1
- The check for updates no longer uses `logger`. By @ternaus
- Bugfix in `HistogramMatching`. Previously it returned an array of ones; now it works as expected. By @ternaus
1.4.13
Albumentations 1.4.12 Release Notes
- Support Our Work
- Transforms
- Core Functionality
- Deprecations
- Improvements and Bug Fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one click away.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server.
Transforms
Added TextImage transform
Allows adding text on top of images. Works with `np.uint8` and `np.float32` images with any number of channels.
Additional functionalities:
- Insert random stopwords
- Delete random words
- Swap word order
Core functionality
Added the `images` target
You can now apply the same transform to a list of images of the same shape, not just a single image.
Use cases:
- Video: Split video into frames and apply the transform.
- Slices of 3D volumes: For example, in medical imaging.
```python
import numpy as np
import albumentations as A

transform = A.Compose([A.Affine(p=1)])
frames = [np.zeros((128, 128, 3), dtype=np.uint8) for _ in range(4)]  # list of same-shape images
transformed = transform(images=frames)
transformed_images = transformed["images"]
```
Note:
You can apply the same transform to any number of images, masks, bounding boxes, and sets of keypoints using the `additional_targets` functionality (see the notebook with examples).
Contributors @ternaus, @ayasyrev
get_params_dependent_on_data
Relevant for those who build custom transforms.
Old way:
```python
@property
def targets_as_params(self) -> list[str]:
    return <list of targets>

def get_params_dependent_on_targets(self, params: dict[str, Any]) -> dict[str, np.ndarray]:
    image = params["image"]
    ...
```
New way:
```python
def get_params_dependent_on_data(self, params: dict[str, Any], data: dict[str, Any]) -> dict[str, np.ndarray]:
    image = data["image"]
    ...
```
Contributor @ayasyrev
Added `shape` to params
Old way:
```python
def get_params_dependent_on_targets(self, params: dict[str, Any]) -> dict[str, np.ndarray]:
    image = params["image"]
    shape = image.shape
```
New way:
```python
def get_params_dependent_on_data(self, params: dict[str, Any], data: dict[str, Any]) -> dict[str, np.ndarray]:
    shape = params["shape"]
```
Contributor @ayasyrev
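To tie the two changes together, here is a hedged sketch of a custom transform using the new hooks; the class name, its `shift` parameter, and the roll logic are invented for illustration and are not part of the library:

```python
import random
from typing import Any

import numpy as np
import albumentations as A

class RandomHorizontalRoll(A.ImageOnlyTransform):
    """Illustrative custom transform: rolls the image by a random number of columns."""

    def get_params_dependent_on_data(
        self, params: dict[str, Any], data: dict[str, Any]
    ) -> dict[str, Any]:
        width = params["shape"][1]  # image shape now comes from params
        return {"shift": random.randint(0, width // 10)}

    def apply(self, img: np.ndarray, shift: int, **params: Any) -> np.ndarray:
        return np.roll(img, shift, axis=1)
```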
Deprecations
Elastic Transform
Deprecated the `alpha_affine` parameter in `ElasticTransform`. To apply affine effects to your image, use the `Affine` transform (see the sketch below).
Contributor @ternaus
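A hedged migration sketch: the parameter values are illustrative, and the point is simply to move the affine component into an explicit `Affine` step in the pipeline:

```python
import albumentations as A

# Instead of ElasticTransform(..., alpha_affine=...), add an explicit Affine step.
transform = A.Compose([
    A.Affine(scale=(0.9, 1.1), rotate=(-10, 10), translate_percent=0.05, p=1),
    A.ElasticTransform(alpha=1, sigma=50, p=1),
])
```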