
Shift CTLearn imagemapper methods to DL1DH #50

Merged: 18 commits into master from shift_imagemapper on Jun 14, 2019

Conversation

TjarkMiener
Member

This PR is about the integration of the CTLearn image mapping methods into DL1DH. These image mapping methods convert a 1D array of pixel charges into a 2D image consisting of a square pixel grid. We implemented five different conversion methods and two shifting methods, in which the camera pixels are shifted (only for cameras with a hexagonal layout) to arrange a square grid. The structure of the implementation is the same for all methods: we perform all costly calculations beforehand, independently of the event images, and store the results in mapping tables that contain the assignment information. During training, the mapping tables are applied to each event image, which preserves the fast runtime. As a result, we get almost the same runtime for all conversion methods.
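For illustration, here is a minimal sketch of the precompute-then-apply idea. The real DL1DH tables are stored as sparse matrices, and all names and sizes below are illustrative rather than the actual API:

```python
import numpy as np

n_pixels, out = 1764, 56  # FlashCam-like sizes, chosen only for illustration

# Toy "nearest neighbor"-style table: each camera pixel contributes weight 1.0
# to exactly one output grid point (real tables hold interpolation weights).
rng = np.random.default_rng(0)
mapping_table = np.zeros((n_pixels, out, out))
rows = rng.integers(0, out, n_pixels)
cols = rng.integers(0, out, n_pixels)
mapping_table[np.arange(n_pixels), rows, cols] = 1.0

def map_image(charges, table):
    # The intensity of every output pixel is a linear combination of the
    # camera pixel charges, so mapping one event is a single contraction.
    return np.tensordot(charges, table, axes=1)  # -> (out, out) image

image = map_image(rng.random(n_pixels), mapping_table)
print(image.shape)  # (56, 56)
```

Because this contraction is the only per-event work, every conversion method costs roughly the same at training time.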

Conversion methods:

  • Oversampling converts each hexagonal pixel to four square pixels and reweights the charge to realign to a square grid, in order to preserve the intensity.
  • Nearest interpolation assigns the pixel charge to the nearest neighbor among the square grid points.
  • Rebinning interprets the intensity as a histogram and rebins the hexagonal histogram into a square one. Since our implementation rebins the histogram only approximately, the intensity is only preserved to a certain point.
    [Rebinning illustration. Credits: Holch]
    We currently calculate the areas as follows: we place 100 points per square pixel and then perform a nearest-neighbor search to determine which hexagon each point lies in (see the sketch after this list). Improvements and/or ideas for calculating the areas analytically are welcome!
  • For bilinear and bicubic interpolation, we add artificial pixels with zero intensity outside of the camera so that a Delaunay triangulation can be drawn on the hexagonal grid. Then we perform the bilinear or bicubic interpolation and normalize the mapping tables, which approximately preserves the intensity.
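Below is a hedged sketch of the Monte Carlo area estimate mentioned under rebinning: sample points inside a square pixel and assign each to its nearest hexagon center. For a hexagonal grid, the Voronoi cell of each center is the hexagon itself, so the nearest-center lookup decides containment. The function and argument names are illustrative, not the DL1DH API:

```python
import numpy as np
from scipy.spatial import cKDTree

def rebin_weights(hex_centers, square_corner, square_size, n_points=100):
    """Approximate the overlap fraction of one square pixel with each hexagon.

    hex_centers   : (n_hex, 2) array of hexagonal pixel center positions
    square_corner : (x, y) lower-left corner of the square pixel
    Returns an (n_hex,) array of weights summing to 1.
    """
    rng = np.random.default_rng(0)
    # Uniform random points inside the square pixel.
    points = square_corner + rng.random((n_points, 2)) * square_size
    # Nearest hexagon center == containing hexagon (Voronoi property).
    _, idx = cKDTree(hex_centers).query(points)
    return np.bincount(idx, minlength=len(hex_centers)) / n_points
```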

All conversion methods are also available for the square-pixel cameras. A requirement of these methods is the alignment of the camera pixel positions, so for LSTCam, NectarCam and MAGICCam we have to rotate the pixel positions before applying the algorithm. We added the option 'rotate_camera' to the image_mapper so that we are able to rotate the mapping tables back to get the original shower orientation. However, an extra interpolation (scipy.ndimage.rotate) is needed for this step. There are two other important options in the image_mapper. The first one is 'interpolation_image_shape', which specifies the shape of the output grid. The second one is 'mask_interpolation', which sets all output grid points located outside of the camera to zero. If you only want to consider image charges in your analysis, the default 'channels' option should be selected.
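A hypothetical configuration sketch of these options follows. The constructor call with camera_types and mapping_method matches a snippet later in this thread, but the method name and the exact formats of the other keywords are assumptions, not verified API:

```python
from dl1_data_handler.image_mapper import ImageMapper

# 'bilinear_interpolation' as a method name and the dict/bool formats of
# the last two keywords are assumptions for illustration only.
mapper = ImageMapper(
    camera_types=['LSTCam'],
    mapping_method={'LSTCam': 'bilinear_interpolation'},
    interpolation_image_shape={'LSTCam': (110, 110)},  # shape of the output grid
    mask_interpolation=True,  # zero output grid points outside the camera
)
```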

The image below shows the impact of the different conversion methods on a simulated proton event captured by FlashCam.

FlashCam_conversion_methods

Some preliminary results comparing the different methods on the old CTLearn dataset can be found here. Bilinear interpolation and rebinning seem to perform best, while bicubic and nearest interpolation seem to perform worst. The performance of oversampling is in between.

Shifting methods:
We implemented two shifting methods, which are used by several hexagonal convolution packages. The first is axial addressing, which is used in IndexedConv.
This is the axial addressing of the LSTCam with dummy data (the index of each camera pixel is set as its charge intensity):
LSTCam_axial_addressing

@vuillaut @mikael10j Is this orientation of the axial addressing of the LSTCam the same as the one used in IndexedConv? Did you use ctapipe/instrument/camera.py to perform the axial addressing and get the index matrix? Is anything more needed to apply the IndexedConv package?

The second method is called image shifting and basically shifts every second column by about half a hexagon length. This procedure is used by HexagDly and maybe HexaConv(?).
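A rough sketch of the image-shifting idea, assuming pixel positions already aligned so that columns run parallel to the y-axis (the names are illustrative, not the DL1DH implementation):

```python
import numpy as np

def shift_columns(pix_x, pix_y, hex_height):
    """Shift every second pixel column by half a hexagon length in y,
    so the staggered hexagonal layout lands on a square grid."""
    x = np.asarray(pix_x, dtype=float)
    y = np.asarray(pix_y, dtype=float).copy()
    columns = np.unique(np.round(x, 6))  # group pixels by x coordinate
    for i, cx in enumerate(columns):
        if i % 2 == 1:  # every second column
            y[np.isclose(x, cx)] += hex_height / 2.0
    return x, y
```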

TjarkMiener and others added 7 commits May 7, 2019 07:28
…apper

Removed ImageMapper dependency on fits files, modified to use ctapipe.instrument.CameraGeometry.
Minor fix to reader's call to ImageMapper.
Added Jupyter Notebook.


Temporarily removed FACT mapping table due to shape mismatch issue.

Finished data reader part of demo notebook.


@aribrill
Collaborator

aribrill commented May 7, 2019

As a note, there are a couple of other outstanding issues relating to image mapping. I think these could be handled in separate PRs.

Add method to return unmapped vectors: #49
Fix bug when both normalizing and masking: ctlearn-project/ctlearn#85

Masking means explicitly zeroing out all output pixels that are not nearest to the position of a hexagonal camera pixel. The purpose is to remove an edge artifact caused by interpolation between real pixels and the 0-valued artificial pixels just outside the camera, which would primarily affect partial/off-axis events. The problem is that doing this breaks the strict normalization of the mapping table. So far, in practice, the effect of this artifact seems minimal, as does the effect of masking in general.
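A sketch of that masking step, under the assumption that the output grid points and the real/virtual pixel positions are available as coordinate arrays (names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def mask_outside_camera(image, grid_points, real_pix_pos, virtual_pix_pos):
    """Zero output pixels whose nearest camera element is a 0-valued
    artificial pixel outside the camera edge.

    grid_points must be an (out*out, 2) array matching image.ravel().
    """
    tree = cKDTree(np.vstack([real_pix_pos, virtual_pix_pos]))
    _, nearest = tree.query(grid_points)
    outside = nearest >= len(real_pix_pos)  # nearest element is virtual
    masked = image.ravel().copy()
    masked[outside] = 0.0
    return masked.reshape(image.shape)
```

Zeroing these pixels after the table has been normalized is exactly what breaks the strict normalization mentioned above.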

@aribrill
Collaborator

aribrill commented May 7, 2019

I added a Jupyter notebook demonstrating the performance of the image mapper.

GitHub's notebook viewer isn't working for me. If you have the same problem, nbviewer should work:
dl1_data_handler_demo
test_image_mapper

aribrill added a commit to ctlearn-project/ctlearn that referenced this pull request May 7, 2019
A similar notebook has been added in DL1DH in:
cta-observatory/dl1-data-handler#50
@TjarkMiener
Copy link
Member Author

TjarkMiener commented May 8, 2019

@aribrill The mapping tables are sparse 3D matrices with shape (number of camera pixels, output/image size, output/image size). They contain the weights needed to compute the intensity of the output pixels: each output pixel's intensity is a linear combination of the intensities of the camera pixels. This 3D shape is strictly necessary for the conversion methods, but not for the shifting methods, where the camera pixels are only shifted, so there is no linear combination and the stored weights are either 0 or 1. I implemented the shifting methods the same way as the conversion methods to keep a clean map_image() function. However, I now strongly believe we can shrink the mapping table into a single matrix for the shifting methods: rather than storing weights, we would store the indexes. Is this the input needed for the IndexedConv package, @vuillaut @mikael10j?
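A minimal sketch of that shrinking step, assuming a dense 0/1 weight table and using -1 for empty output cells (matching the fake-pixel convention mentioned later in this thread):

```python
import numpy as np

def table_to_index_matrix(table):
    """Collapse a (n_pixels, out, out) 0/1 weight table into a single
    (out, out) matrix holding the camera pixel index per output cell."""
    occupied = table.sum(axis=0) > 0            # cells hit by some camera pixel
    index_matrix = np.full(table.shape[1:], -1, dtype=int)
    index_matrix[occupied] = table[:, occupied].argmax(axis=0)
    return index_matrix
```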

@mikael10j

Rather than storing weights, we would store the indexes. Is this the input needed for the IndexedConv package, @vuillaut @mikael10j?

In general, you need to provide IndexedConv with the list of the neighbours (their indices) of each pixel of interest. To compute this list for hexagonal pixel images, the package already has functions based on the axial addressing system. If the indices you're storing respect the axial addressing system, you can use them directly; otherwise you need to create the functions to extract the list of neighbours.
For the record, we have added to the package functions to build this list based on Delaunay tessellation. We should release it in the coming weeks.
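For context, a hedged sketch (not the IndexedConv API) of extracting per-pixel neighbour lists from an axial-addressing index matrix, with fake pixels marked as -1:

```python
import numpy as np

# The six hexagonal neighbours in axial coordinates (q, r).
AXIAL_OFFSETS = [(0, -1), (0, 1), (-1, 0), (1, 0), (-1, 1), (1, -1)]

def neighbour_lists(index_matrix):
    """Return {pixel index: [neighbour pixel indices]} for all real pixels."""
    h, w = index_matrix.shape
    neighbours = {}
    for r in range(h):
        for c in range(w):
            pix = int(index_matrix[r, c])
            if pix < 0:  # fake pixel
                continue
            nbrs = []
            for dr, dc in AXIAL_OFFSETS:
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and index_matrix[rr, cc] >= 0:
                    nbrs.append(int(index_matrix[rr, cc]))
            neighbours[pix] = nbrs
    return neighbours
```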

@mikael10j

@vuillaut @mikael10j Is this orientation of the axial addressing of the LSTCam the same as the one used in IndexedConv? Did you use ctapipe/instrument/camera.py to perform the axial addressing and get the index matrix? Is anything more needed to apply the IndexedConv package?

The orientation looks like the one we have. To perform the axial addressing we've so far used functions created by Pierre, and I wasn't aware of ctapipe's ability to do it. Thank you for pointing it out. I need to check whether the result is the same.

@mikael10j

The orientation looks like the one we have.

Actually our orientation is not the same.
Here is the default geometry in ctapipe:

CameraGeometry: "LSTCam"

  • num-pixels: 1855
  • pixel-type: hexagonal
  • sensitive-area: 4.016016000618191 m2
  • pix-rotation: 100.893 deg
  • cam-rotation: 0.0 deg
    image

In our case we derotate the image first:

CameraGeometry: "LSTCam"

  • num-pixels: 1855
  • pixel-type: hexagonal
  • sensitive-area: 4.016016386167223 m2
  • pix-rotation: 0.0 deg
  • cam-rotation: -100.893 deg
    image

and then use a function created by Pierre (in hipecta) to find the positions of the pixels in the axial addressing system. From this we create the index matrix of the image:
image
Fake pixels have an index of -1.

@maxnoe
Member

maxnoe commented May 17, 2019

As discussed here: cta-observatory/ctapipe#255

The derotation should probably not be done. If the LST camera has a flat top, the coordinate system should be in the HESS MC coordinate system.

@TjarkMiener
Member Author

TjarkMiener commented May 22, 2019

Thanks @mikael10j for clarifying the orientation!
Commit bc2db97 solves the orientation discrepancy. I added a function to retrieve the index matrix (the input for IndexedConv) in commit bd1e66c. @mikael10j @vuillaut, could you please check and confirm that this is the right input for IndexedConv?

The rotation/de-rotation can be quite confusing! Thanks for linking us to the useful conversation, @maxnoe! To apply the axial addressing, we have to align the hexagonal grid with the x- and y-axes, which is why it's obligatory to de-rotate the affected camera types. The de-rotation is also obligatory for all conversion methods, but there we added a second rotation that rotates the camera back to the original (correct) orientation.
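For concreteness, a minimal sketch of the de-rotation step, assuming we rotate the raw pixel positions by -pix_rotation (100.893 deg for LSTCam, per the geometry printed above) so the hexagonal grid aligns with the axes:

```python
import numpy as np

def derotate(pix_x, pix_y, pix_rotation_deg):
    """Rotate pixel positions by -pix_rotation to axis-align the grid."""
    phi = np.deg2rad(-pix_rotation_deg)
    rot = np.array([[np.cos(phi), -np.sin(phi)],
                    [np.sin(phi),  np.cos(phi)]])
    return rot @ np.vstack([pix_x, pix_y])

pix_x = np.array([0.00, 0.05, -0.05])  # toy positions in meters
pix_y = np.array([0.00, 0.02,  0.02])
x_aligned, y_aligned = derotate(pix_x, pix_y, 100.893)
```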

@mikael10j

mikael10j commented May 28, 2019

Hi @TjarkMiener, I've compared the index matrix returned by ImageMapper with the one we build, and it appears that the former is flipped upside down compared to the latter.
im = ImageMapper(camera_types=['LSTCam'], mapping_method={'LSTCam': 'axial_addressing'})
dl1_index_matrix = im.get_indexmatrix('LSTCam')
plt.imshow(dl1_index_matrix)
image
plt.imshow(ic_index_matrix.squeeze(0).squeeze(0))
image
torch.allclose(torch.tensor(np.ascontiguousarray(np.flip(dl1_index_matrix, 0)), dtype=torch.float32), ic_index_matrix.squeeze(0).squeeze(0))
True

@TjarkMiener
Member Author

Thanks, @mikael10j! In CTLearn the indexes of our images used to start from the bottom-left corner, while the indexes of your images start from the top-left corner. All images produced by DL1DH would then have that flip! What would be the smartest way of solving this? Adding an option to the ImageMapper that performs the flip on the output images? We could also perform the flip on the mapping tables (in the init), but there we would have to perform the flip "number of camera pixels" times (see the sketch below). @aribrill @bryankim96 @nietootein

I'm getting the same orientation (in CTLearn indexing):
LSTCam_axial
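For comparison, here are the two options sketched side by side (illustrative, assuming a dense table of shape (n_pixels, out, out)):

```python
import numpy as np

def flip_output(image):
    # Option 1: flip every mapped image at read time.
    return np.flip(image, axis=0)

def flip_mapping_table(table):
    # Option 2: flip each of the n_pixels 2D slices once in the init,
    # baking the new indexing convention into the table itself.
    return np.flip(table, axis=1)
```

Since the table flip happens only once, the per-event cost of the two options is essentially the same after initialization.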

@TjarkMiener
Member Author

We adopted your convention, @mikael10j @vuillaut. The output of map_image() and get_indexmatrix() should now be exactly what you had before. Can you please check?

@mikael10j

Thank you @TjarkMiener, I'll check today.

@mikael10j

Looks good!

@nietootein
Member

It seems that the Travis error is fixed by #61, so I suggest we go ahead and merge!

@TjarkMiener TjarkMiener merged commit 73569a1 into master Jun 14, 2019
@TjarkMiener TjarkMiener deleted the shift_imagemapper branch January 13, 2022 15:48