Add ImageNet #146
base: master
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master     #146      +/-   ##
==========================================
- Coverage   48.56%   47.23%    -1.33%
==========================================
  Files          44       47       +3
  Lines        2261     2335      +74
==========================================
+ Hits         1098     1103       +5
- Misses       1163     1232      +69
```
Will be good to have ImageNet support! I'm wondering if there may be a simpler implementation for this, though. It seems the dataset has the same format as the (derived) ImageNette and ImageWoof datasets. The way those are loaded in FastAI.jl combines the MLUtils.jl primitives, and those could be used to load ImageNet as follows:

```julia
using MLDatasets, MLUtils, FileIO

function ImageNet(dir)
    files = FileDataset(identity, dir, "*.JPEG").paths
    return mapobs((FileIO.load, loadlabel), files)
end

# get the class name from the file path; could add a lookup here
# to convert the ID to the human-readable name
loadlabel(file::String) = split(file, "/")[end-2]

data = ImageNet(IMAGENET_DIR)

# only training set
data = ImageNet(joinpath(IMAGENET_DIR, "train"))
```

I'd also suggest using FileIO.jl for loading images, which will use the faster JpegTurbo.jl to load them. If more control over image loading is desired, like converting to a color type upon reading or loading an image at a smaller size (much faster if it'll be downsized during training anyway), one could also use JpegTurbo.jl directly:

```julia
function ImageNet(dir; C = RGB{N0f8}, preferred_size = nothing)
    files = FileDataset(identity, dir, "*.JPEG").paths
    return mapobs((f -> JpegTurbo.jpeg_decode(C, f; preferred_size), loadlabel), files)
end

# load as grayscale and at a smaller image size
data = ImageNet(IMAGENET_DIR; C = Gray{N0f8}, preferred_size = (224, 224))
```
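As a small self-contained illustration (not from the PR; the helper name is hypothetical): with the directory layout assumed above, `<root>/<split>/<wnid>/<file>.JPEG`, the class label is the name of the file's parent directory, which can also be extracted with `Base` path functions:

```julia
# Hypothetical helper: extract the WordNet ID label from a path of the
# assumed form <root>/<split>/<wnid>/<file>.JPEG. The wnid is simply the
# name of the file's parent directory.
wnid_from_path(file::AbstractString) = basename(dirname(file))

label = wnid_from_path("ImageNet/train/n01440764/n01440764_10026.JPEG")
@assert label == "n01440764"
```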
Thanks a lot, loading smaller images with JpegTurbo is indeed much faster!
I've done some local benchmarks:

```julia
julia> using MLDatasets

julia> dataset = ImageNet(Float32, :val);

julia> @benchmark dataset[1:16]
BenchmarkTools.Trial: 44 samples with 1 evaluation.
 Range (min … max):  104.413 ms … 143.052 ms  ┊ GC (min … max):  7.28% … 18.57%
 Time  (median):     113.164 ms               ┊ GC (median):    10.80%
 Time  (mean ± σ):   115.515 ms ±   9.030 ms  ┊ GC (mean ± σ):  10.46% ±  3.68%

  ▃            █
  ▇▄█▄▁▁▄▄▇▇▇▄▄█▄▇▄▄▇▁▄▁▄▁▁▄▁▄▄▁▄▁▁▄▄▁▁▁▄▁▁▁▁▁▄▁▄▁▁▁▄▁▁▁▁▁▁▁▁▁▄ ▁
  104 ms          Histogram: frequency by time          143 ms <

 Memory estimate: 131.78 MiB, allocs estimate: 2050.
```

Without:

```julia
julia> @benchmark dataset[1:16]
BenchmarkTools.Trial: 57 samples with 1 evaluation.
 Range (min … max):  80.594 ms … 103.226 ms  ┊ GC (min … max):  7.43% … 19.03%
 Time  (median):     86.954 ms               ┊ GC (median):     8.95%
 Time  (mean ± σ):   88.287 ms ±   5.683 ms  ┊ GC (mean ± σ):  10.90% ±  3.57%

  ▄ ▄ ▁▁  █▄ ▁▄  ▁ ▁  ▁▁ ▁ ▄ ▁ ▁ ▁ ▁
  ▆▆█▆█▁██▁██▆▁██▆▁▆█▁▁▆▁█▁▆██▆█▁▁▁█▆▁▆█▁█▁▁▁█▁▁▁▆▁▁▁▁▁▁▁█▁▁▁▆ ▁
  80.6 ms         Histogram: frequency by time          101 ms <

 Memory estimate: 115.96 MiB, allocs estimate: 1826.
```

Additionally using StackViews.jl for batching:

```julia
julia> @benchmark dataset[1:16]
BenchmarkTools.Trial: 95 samples with 1 evaluation.
 Range (min … max):  47.971 ms … 73.503 ms  ┊ GC (min … max): 0.00% … 8.68%
 Time  (median):     51.116 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   52.903 ms ±  4.922 ms  ┊ GC (mean ± σ):  4.69% ± 5.81%

  ▂ ▂▄█
  █▇█████▃▅▃▅▅▆▃█▇▇▅▅▅▁▅▁▁▃▃▆▆▁▁▁▁▁▅▃▃▁▃▁▁▁▁▁▁▁▁▁▃▁▁▁▃▁▁▁▁▁▁▃ ▁
  48 ms           Histogram: frequency by time          70 ms <

 Memory estimate: 38.43 MiB, allocs estimate: 1499.
```
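For context on the last benchmark: the batching step stacks the sixteen per-image `(H, W, C)` arrays into one 4-D array. StackViews.jl does this lazily without copying, which explains the drop in allocations. An eager `Base`-only sketch of the same operation (names hypothetical, not from the PR):

```julia
# Eager batching sketch: concatenate N arrays of size (H, W, C) into one
# (H, W, C, N) array. StackViews.jl achieves the same result lazily,
# avoiding the large intermediate allocation seen in the earlier benchmarks.
batchobs(imgs::Vector{<:AbstractArray{T,3}}) where {T} = cat(imgs...; dims=4)

imgs = [rand(Float32, 224, 224, 3) for _ in 1:16]
batch = batchobs(imgs)
@assert size(batch) == (224, 224, 3, 16)
@assert batch[:, :, :, 5] == imgs[5]
```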
I'm against `@lazy import ImageCore` (and `@lazy import ImageShow`), because it's very likely to hit the world-age issue if not used carefully. I mean, if this were a safe solution, I'd be the first one to refactor the JuliaImages ecosystem this way. But since @CarloLucibello is the actual maintainer of this package, I'll leave this decision to him.
```julia
# Load image from ImageNet file path and preprocess it to a normalized 224x224x3 Array{Tx,3}
function readimage(Tx::Type{<:Real}, file::AbstractString)
    im = JpegTurbo.jpeg_decode(ImageCore.RGB{Tx}, file; preferred_size=IMGSIZE)
```
I'm not sure if all ImageNet images meet the requirement, but note that the actual decoded result size `size(im)` might not be `preferred_size`.
I'm actually running into warnings with images smaller than `preferred_size`:

```
┌ Warning: Failed to infer appropriate scale ratio, use `scale_ratio=2` instead.
│   actual_size = (127, 100)
│   preferred_size = (224, 224)
└ @ JpegTurbo ~/.julia/packages/JpegTurbo/b5MSG/src/decode.jl:165
```

Do you have experience with this, @lorenzoh?
The reason for this is that JpegTurbo.jl (or rather libjpeg-turbo) only supports a very limited set of `scale_ratio` values, so for this image it falls back to `scale_ratio = 2`. This is exactly why `size(img) == preferred_size` may not hold in practice.

The supported `scale_ratio` values permit a faster decoding algorithm (by scaling the coefficients instead of the actual images), which is why we can observe the performance boost here.
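To make the limitation concrete: libjpeg-turbo can only scale during decoding by the fixed rational factors M/8 for M = 1:16, so an arbitrary `preferred_size` gets approximated by the nearest supported ratio. A rough sketch of that snapping logic (an illustration of the constraint, not JpegTurbo.jl's actual implementation):

```julia
# Illustration only: libjpeg-turbo's supported decode-time scaling factors.
const SCALE_RATIOS = [M // 8 for M in 1:16]  # 1/8, 1/4, ..., 15/8, 2

# Pick the supported ratio closest to the requested size reduction.
nearest_ratio(preferred::Int, actual::Int) =
    argmin(r -> abs(r - preferred / actual), SCALE_RATIOS)

# A 500-pixel edge decoded toward 224 pixels would be scaled by 1/2,
# yielding 250 pixels rather than exactly 224.
@assert nearest_ratio(224, 500) == 1//2
```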
The perhaps safest solution (I think) is to add an `imresize` after it:

```julia
img = @suppress_err JpegTurbo.jpeg_decode(file; preferred_size=(224, 224))
if size(img) != (224, 224)
    img = imresize(img, (224, 224))
end
```

The `@suppress_err` macro is a handy tool from https://github.com/JuliaIO/Suppressor.jl to disable this warning message.

I don't plan to make this `imresize` happen automatically in JpegTurbo.jl because it would otherwise break people's expectation that the keyword `preferred_size` makes decoding faster.
Thanks for the review @Dsantra92!
The order of the classes in the metadata also still has to be fixed, as it doesn't match https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt.
Sorry for stalling this. I guess the issue with this PR boils down to whether preprocessing functions belong in MLDatasets or in packages exporting pre-trained models. This question has already been raised in FluxML/Metalhead.jl#117.

Since images in ImageNet have different dimensions, providing an ImageNet data loader without matching preprocessing functions would be somewhat useless, as it would not be able to load batches of data (as discussed earlier in this thread).

I took a look at how other Deep Learning frameworks deal with this, and both torchvision and Keras Applications export preprocessing functions with their pre-trained models.
Hey everyone, this looks awesome. Is anyone still working on this? Otherwise I would suggest trying to merge this, even if it's not "perfect" with regards to extra dependencies or open questions about transformations.
I'm still interested in working on this. To get this merged, we could make the …

Edit: …
This needs a rebase; otherwise looks mostly good.
```julia
const PYTORCH_MEAN = [0.485f0, 0.456f0, 0.406f0]
const PYTORCH_STD = [0.229f0, 0.224f0, 0.225f0]

normalize_pytorch(x) = (x .- PYTORCH_MEAN) ./ PYTORCH_STD
inv_normalize_pytorch(x) = x .* PYTORCH_STD .+ PYTORCH_MEAN
```
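One subtlety worth noting (my illustration, not part of the diff): broadcasting these 3-element vectors against an image array only normalizes per channel if the constants span the channel dimension. A sketch for an assumed `(H, W, C)` Float32 layout, with hypothetical names:

```julia
# Sketch (assumption: images stored as (H, W, C) Float32 arrays).
# Reshaping the constants to (1, 1, 3) makes the subtraction/division
# broadcast over the channel dimension rather than the first image axis.
const CMEAN = reshape([0.485f0, 0.456f0, 0.406f0], 1, 1, 3)
const CSTD  = reshape([0.229f0, 0.224f0, 0.225f0], 1, 1, 3)

normalize(x) = (x .- CMEAN) ./ CSTD
inv_normalize(x) = x .* CSTD .+ CMEAN

x = rand(Float32, 224, 224, 3)
y = normalize(x)
@assert size(y) == (224, 224, 3)
@assert isapprox(inv_normalize(y), x; atol=1.0f-4)  # round-trips up to Float32 error
```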
I would drop the `pytorch` prefix/suffix and use something else. The comments can stay. If PyTorch isn't the only library that does this preprocessing, then it makes sense to represent that with more general names. If different libraries provide different preprocessing functionality for ImageNet (or don't provide any), then I'd argue there is no canonical default set of ImageNet transformations, and this code (aside from maybe the descriptive stats) shouldn't be in MLDatasets.
Good point. Since this is just an internal function used by `default_preprocess`, I would suggest either `_normalize` or `default_normalize`. The appeal of using these coefficients as defaults is that they should work out of the box with pre-trained vision models from Metalhead.jl.
Wait, so do other libraries provide this functionality in their ImageNet dataset APIs? I checked https://www.tensorflow.org/datasets/catalog/imagenet2012 and it has no mention of preprocessing, so is PyTorch the only library that does this? If so, I would vote to remove the preprocessing functions as mentioned above.
If I am not wrong, these normalization values depend on the model you are using. Also, none of the existing vision datasets have preprocessing functions. These functions are ideally handled by data preprocessing libraries/modules.
The norm values should not be model-specific. They're derived directly from the data before any model is involved.
In the PyTorch case, notice however that although the transformations are stored with the "model weights", the mean and std are the same across models (see e.g. the MobileNet model).

In a similar spirit, I would definitely defend the decision of shipping the set of transformations (cropping, interpolation, linear transformation, etc.) as part of the dataset. However, I agree with the very first point that the name `transformation_pytorch` isn't really precise, although I think it is fair to link to the corresponding transformations for TensorFlow, PyTorch, and/or the timm library in a related comment.
PyTorch also lumps code for pretrained models, data augmentations, and datasets into one library; I don't think we need to follow their every example :)

> In a similar spirit, I would definitely defend the decision of shipping the set of transformations (cropping, interpolation, linear transformation, etc) as part of the dataset.

This is precisely why I asked about what other libraries are doing. If nobody else is shipping the same set of transformations, then they can hardly be considered canonical for ImageNet. That doesn't mean we should never ship helpers to create common augmentation pipelines, but that that need is better served by packages which have access to efficient augmentation libraries (e.g. Augmentor, DataAugmentation), and not by some unoptimized implementation which is simultaneously more general (because it's applicable to other datasets) and less general (because many papers using ImageNet do not use these augmentations) than the dataset it's been attached to.
Let's just apply the `channelview` and permute transformation by default here, and make the (permuted) mean and std values part of the type.
I have also taken a look at Keras' ImageNet utilities. While these normalization constants are used in many places throughout torchvision and PyTorch, it looks like TensorFlow and Keras do indeed use their own constants.

I agree with @ToucheSir's sentiment:

> If nobody else is shipping the same set of transformations, then they can hardly be considered canonical for ImageNet.

However, this point can be drawn even further, as nothing about ImageNet is truly canonical. To give some examples (some of which have previously been discussed):

- There is no canonical reason why images have to be loaded in 224 x 224 format.
- There is no canonical reason to apply the resizing algorithm JpegTurbo.jl uses when calling `jpeg_decode` with a `preferred_size`.
- There is no canonical way of sorting class labels. Some sort by WordNet ID (e.g. PyTorch), others don't.

Getting this merged

To make this data loader as "unopinionated" as possible, we could just make it a very thin wrapper around `FileDataset` which only loads metadata. This would require the user to pass a `loadfn` which handles the transformation from file path to array. Class ordering could be handled using a `sort_by_wnid=true` keyword argument, and all new dependencies introduced in this PR could be removed (ImageCore, JpegTurbo and StackViews).

Future work

However, I do strongly feel that some package in the wider Julia ML / deep learning ecosystem should export `loadfn`s that are usable with Metalhead's PyTorch models out of the box. @lorenzoh previously proposed adding such functionality to DataAugmentation.jl in FluxML/Metalhead.jl#117.

Once this functionality is available somewhere, `ImageNet`'s docstring in MLDatasets should be updated to showcase this common use case. Until then, I would suggest adding a "Home" => "Tutorials" => "ImageNet" page to the MLDatasets docs which implements the current load function.
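A rough sketch of what such a thin, unopinionated wrapper could look like (hypothetical type and field names, `Base` only — the real version would wrap `FileDataset` and the MLDatasets interfaces):

```julia
# Hypothetical sketch of the proposed thin wrapper: the user supplies
# `loadfn`, and the dataset only records file paths and the
# (optionally WordNet-ID-sorted) class list.
struct ThinImageNet{F}
    loadfn::F
    paths::Vector{String}
    classes::Vector{String}
end

function ThinImageNet(loadfn, paths::Vector{String}; sort_by_wnid::Bool = true)
    # class (wnid) of each sample is the name of its parent directory
    wnids = unique(basename(dirname(p)) for p in paths)
    sort_by_wnid && sort!(wnids)
    return ThinImageNet(loadfn, paths, wnids)
end

Base.length(d::ThinImageNet) = length(d.paths)
# getindex defers all image loading/preprocessing to the user's loadfn
Base.getindex(d::ThinImageNet, i::Integer) =
    (features = d.loadfn(d.paths[i]), targets = basename(dirname(d.paths[i])))

paths = ["root/val/n02084071/a.JPEG", "root/val/n01440764/b.JPEG"]
d = ThinImageNet(identity, paths)  # identity as a stand-in loadfn
@assert d.classes == ["n01440764", "n02084071"]
@assert d[2].targets == "n01440764"
```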
> it looks like TensorFlow and Keras do indeed use their own constants.

Nice find. I was not expecting that `mode == "torch"` conditional.

> However, this point can be drawn even further, as nothing about ImageNet is truly canonical. To give some examples (some of which have previously been discussed):
>
> 1. There is no canonical reason why images have to be loaded in 224 x 224 format.
> 2. There is no canonical reason to apply the resizing algorithm JpegTurbo.jl uses when calling `jpeg_decode` with a `preferred_size`.
> 3. There is no canonical way of sorting class labels. Some sort by WordNet ID (e.g. PyTorch), others don't.

The difference here is that all three of those points can have a decent fallback without depending on external packages. Another argument is that more people will rely on these defaults than won't. I'm not sure augmentations pass that threshold.

I'm not saying users shouldn't be able to pass in a transformation function, but `identity` or some such seems a more defensible default. Indeed, the torchvision `ImageNet` class does not do any additional transforms by default, so we'd be deviating from every other library if we stuck with this default centre crop.
In case someone is still interested in using this, I've opened an unregistered repository containing this PR. The most notable difference is that ImageNetDataset.jl contains some custom preprocessing pipelines that support …
Draft PR to add the ImageNet 2012 Classification Dataset (ILSVRC 2012-2017) as a `ManualDataDep`. Closes #100.

Since ImageNet is very large (>150 GB) and requires signing up and accepting the terms of access, it can only be added manually. The `ManualDataDep` instruction message for ImageNet includes the following:

When unpacked "PyTorch-style", the ImageNet dataset is assumed to look as follows: `ImageNet -> split folder -> WordNet ID folder -> class samples as JPEG files`, e.g.:

Current limitations

Since ImageNet is too large to precompute all preprocessed images and keep them in memory, the dataset precomputes a list of all file paths instead. Calling `Base.getindex(d::ImageNet, i)` loads the image via ImageMagick.jl and preprocesses it when required. This adds dependencies on ImageMagick and Images.jl via LazyModules.

This also means that the `ImageNet` struct currently doesn't contain `features` (which might be a requirement for `SupervisedDataset`s?).