
Repeated warping #144

Open
yakir12 opened this issue Aug 31, 2021 · 6 comments

yakir12 (Contributor) commented Aug 31, 2021

Thank you for this awesome package!

I'm auto-tracking animal movements in videos. I'm using VideoIO.jl to extract frames from the video, resize them (a uniform scale plus fixing the Storage Aspect Ratio of the video), and then save a short diagnostic video of the results.

I'm using imresize to do that, but I can't help thinking that there must be a way to save some of the computation when the transform is identical for every frame: the original images and the resized images are always the same size. Is there a way to calculate the transformation once, and then apply it to each of the frames?
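
A minimal sketch of that loop, to make the setting concrete. The path, the target size, and the per-frame processing are placeholders, and the VideoIO usage is an assumption based on its documented reader interface:

using VideoIO, ImageTransformations

reader = VideoIO.openvideo("input.mp4")   # placeholder path
target = (202, 480)                       # placeholder even (height, width)

frame = read(reader)                      # first frame, reused as a read buffer below
small = imresize(frame, target)           # a fresh resize (and interpolation setup) per frame
# ... track the animal in `small`, append it to the diagnostic video ...
while !eof(reader)
    read!(reader, frame)
    small = imresize(frame, target)
    # ... same per-frame work ...
end
close(reader)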

johnnychen94 (Member) commented:

> Is there a way to calculate the transformation once, and then apply it to each of the frames?

Calculating the transformation won't be the major performance hotspot, so what benefit would that give? BTW, there's also an in-place version, imresize!.
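
For reference, a hedged sketch of the in-place route, assuming the two-argument imresize!(dest, src) method and a preallocated buffer (the names first_frame and frames are placeholders):

using ImageTransformations

out = similar(first_frame, (202, 480))   # placeholder even (height, width); allocated once

for frame in frames                      # whatever iterator yields the video frames
    imresize!(out, frame)                # resizes into `out` without allocating a new image
    # ... use `out` ...
end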

yakir12 (Contributor, Author) commented Aug 31, 2021

I'm sure you are correct, but that depends on the number of frames I'm processing. Perhaps the number of frames needed for this to start making a difference is in the thousands -- I don't know how much of the time goes to the interpolation versus calculating the transform.

More importantly, it would allow me to use the view versions of the warp functions. In my case I'm looking for the next location of the tracked animal only in the vicinity of the last known location, so there's no need to scale the whole image, just the region of interest.

And the reason I don't simply build my own transformation with CoordinateTransformations.jl is VideoIO.jl's requirement that the dimensions of a saved frame be multiples of 2. So I need some way of guaranteeing that the scaled image has an even height and width. That is straightforward with imresize.
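
One way to express that constraint directly, as a hedged sketch: round each scaled dimension down to the nearest even number and hand the result to imresize (the `even` helper is illustrative, not an API):

scale = 4
sar = 4/3

even(x) = 2 * floor(Int, x / 2)          # illustrative helper: nearest even number below x

h, w = size(img)
target = (even(h / (scale * sar)), even(w / scale))
small = imresize(img, target)            # imresize then picks the matching transform itself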

johnnychen94 (Member) commented Aug 31, 2021

Maybe you can instead build a lookup table of coordinates and call warp!, and see how that improves the performance. Something like #64 (comment), but backed by a pre-computed lookup table, with the core coordinate computation reduced to a getindex.

To call warp! we currently need to build our own extrapolation; for the simplest bilinear case with a zero fill value we can do etp = extrapolate(interpolate(img, BSpline(Linear())), zero(eltype(img))).
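
Putting those pieces together, a minimal sketch assuming the three-argument warp!(out, etp, tform) method and a placeholder even-sized output buffer:

using ImageTransformations, Interpolations, CoordinateTransformations, StaticArrays

# Bilinear interpolation with a zero fill value, as in the snippet above.
etp = extrapolate(interpolate(img, BSpline(Linear())), zero(eltype(img)))

# Diagonal map from an output coordinate to the input coordinate it samples
# (shrink by 4 and correct a 4/3 storage aspect ratio).
tform = LinearMap(SDiagonal(4 * 4/3, 4.0))

out = similar(img, (202, 480))   # placeholder even-sized output buffer, allocated once
warp!(out, etp, tform)           # per frame, only `etp` needs to be rebuilt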

yakir12 (Contributor, Author) commented Aug 31, 2021

I might not be following you exactly, but to be clear, this is the kind of transform I'm using:

using CoordinateTransformations, StaticArrays

scale = 4              # shrink by a factor of 4
sar = 4/3              # storage aspect ratio correction
scaleh = scale * sar
scalew = scale
tform = LinearMap(SDiagonal(scaleh, scalew))   # 2×2 diagonal scaling map

pretty standard stuff. I'm just shrinking the image by a factor of 4 and fixing its aspect ratio. So no need to use a novel lookup table...?

I think I need to either get my hands on the transform imresize is using, or be able to produce a LinearMap that guarantees the resulting image has dimensions that are multiples of 2.

johnnychen94 (Member) commented Aug 31, 2021

Building a transformation map won't take much time; it is applying map(i -> tform(SVector(i.I)), CartesianIndices(img)) that takes time. So if you want to save the computation, you can pre-calculate lookup_table = map(i -> tform(SVector(i.I)), CartesianIndices(img)) and then define a function that queries this table directly instead of recomputing the coordinates for every frame.
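
A hedged sketch of that idea; the output size is a placeholder and warp_from_table! is an illustrative name, not a package function:

using Interpolations, CoordinateTransformations, StaticArrays

tform = LinearMap(SDiagonal(4 * 4/3, 4.0))

# Computed once, up front: the source coordinate sampled by every output pixel.
lookup_table = map(i -> tform(SVector(i.I)), CartesianIndices((202, 480)))

# Per frame, only interpolation remains; the coordinate math is reduced to getindex.
function warp_from_table!(out, img, table)
    etp = extrapolate(interpolate(img, BSpline(Linear())), zero(eltype(img)))
    for I in CartesianIndices(out)
        out[I] = etp(table[I]...)
    end
    return out
end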

johnnychen94 (Member) commented Aug 31, 2021

The linear coordinate transform that imresize uses is defined this way:

# Define the equivalent of an affine transformation for mapping
# locations in `resized` to the corresponding position in
# `original`. We take the viewpoint that a pixel at `i, j` is a
# sensor that *integrates* the intensity over an area spanning
# `i±0.5, j±0.5` (this is a good model of how a camera pixel
# actually works). We then map the *outer corners* of the two
# images to each other, i.e., in typical cases
# (0.5, 0.5) -> (0.5, 0.5) (outer corner, top left)
# size(resized)+0.5 -> size(original)+0.5 (outer corner, lower right)
# This ensures that both images cover exactly the same area.
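
Under that corner-matching convention, an equivalent map can be built by hand; a hedged sketch (resize_tform and the sizes are illustrative, not the package's internal code):

using CoordinateTransformations, StaticArrays

# Map a coordinate in `resized` to the corresponding coordinate in `original`,
# matching the outer corners: (0.5, 0.5) -> (0.5, 0.5) and size .+ 0.5 -> size .+ 0.5.
function resize_tform(original_size, resized_size)
    s = SVector(original_size) ./ SVector(resized_size)
    AffineMap(SDiagonal(s...), 0.5 .- 0.5 .* s)
end

# Placeholder sizes: an even 202×480 target for a 1080×1920 frame.
tform = resize_tform((1080, 1920), (202, 480))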
