Shift CTLearn imagemapper methods to DL1DH #50
Conversation
…the real orientation
As a note, there are a couple of other outstanding issues relating to image mapping; I think these could be handled in separate PRs. (1) Add a method to return unmapped vectors: #49. (2) Masking, i.e. explicitly zeroing out all output pixels that are not nearest to the position of a hexagonal camera pixel. The purpose is to remove an edge artifact caused by interpolation between real pixels and 0-valued artificial pixels just outside the camera, which would primarily affect partial/off-axis events. The problem is that doing this breaks the strict normalization of the mapping table. So far in practice the effect of this artifact seems minimal, as does the effect of masking in general.
Corrected in ctapipe-extra v0.2.17
I added a Jupyter notebook demonstrating the performance of the image mapper. GitHub's notebook viewer isn't working for me. If you have the same problem, nbviewer should work:
A similar notebook has been added in DL1DH in: cta-observatory/dl1-data-handler#50
@aribrill The mapping tables are sparse 3D matrices with shape (number of camera pixels, output/image size, output/image size). They contain the weights needed to compute the intensity of each output pixel, which is a linear combination of the intensities of the camera pixels. This full shape is required for the conversion methods, but not for the shifting methods. There the camera pixels are only shifted, so there is no linear combination and the weights we store are either 0 or 1. I implemented the shifting methods the same way as the conversion methods to have a clean map_image() function. However, I now strongly believe we can shrink the mapping table into a single matrix for the shifting methods: rather than storing weights, we would store the indexes. Is this the input needed for the IndexedConv package, @vuillaut @mikael10j?
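A minimal sketch of the proposed shrinking, assuming a weight table in which each camera pixel contributes to exactly one output pixel with weight 1 (as in the shifting methods). The function and variable names, and the use of -1 for "no camera pixel", are illustrative assumptions, not the DL1DH implementation:

```python
import numpy as np

def table_to_index_matrix(table):
    """Collapse a (n_pixels, size, size) 0/1 weight table into a single 2D index matrix."""
    n_pixels, size, _ = table.shape
    index_matrix = np.full((size, size), -1, dtype=int)  # -1 marks output pixels with no camera pixel
    pix, rows, cols = np.nonzero(table)                  # positions of the 1-weights
    index_matrix[rows, cols] = pix
    return index_matrix

def map_image_from_indices(charges, index_matrix):
    """Shift-only mapping: look up each output pixel's camera pixel, 0 where empty."""
    padded = np.append(charges, 0.0)                     # the -1 entries pick up this trailing 0
    return padded[index_matrix]
```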
In general, you need to provide IndexedConv with the list of neighbours (their indices) for each pixel of interest. To compute this list for hexagonal pixel images, the package already has functions based on the axial addressing system. If the indices you're storing respect the axial addressing system, you can use them directly; otherwise you need to create the functions to extract the list of neighbours.
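For illustration, a sketch of how such a neighbour list could be extracted from an axial-addressing index matrix; this is not the IndexedConv API, and the offsets, padding value -1, and dictionary output are assumptions:

```python
import numpy as np

# The six hexagonal neighbours in axial coordinates.
HEX_OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

def neighbour_list(index_matrix):
    """Map each camera pixel index to the indices of its six hexagonal neighbours."""
    n_rows, n_cols = index_matrix.shape
    neighbours = {}
    for r in range(n_rows):
        for c in range(n_cols):
            pix = index_matrix[r, c]
            if pix < 0:                                  # not a camera pixel
                continue
            nb = []
            for dr, dc in HEX_OFFSETS:
                rr, cc = r + dr, c + dc
                if 0 <= rr < n_rows and 0 <= cc < n_cols and index_matrix[rr, cc] >= 0:
                    nb.append(int(index_matrix[rr, cc]))
                else:
                    nb.append(-1)                        # padding for missing neighbours
            neighbours[int(pix)] = nb
    return neighbours
```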
The orientation looks like the one we have. To perform the axial addressing we've so far used functions created by Pierre, and I wasn't aware of the ability of ctapipe to do it. Thank you for pointing it out. I need to check if the result is the same.
As discussed here: cta-observatory/ctapipe#255 The derotation should probably not be done. If the LST camera has a flat top, the coordinate system should be in the HESS MC coordinate system. |
Thanks @mikael10j for clarifying the orientation! The rotation/de-rotation can be quite confusing! Thanks for linking us to the useful conversation, @maxnoe! To apply the axial addressing, we have to align the hexagonal grid with the x- and y-axes. This is why it's obligatory to de-rotate the affected camera types. The de-rotation is also obligatory for all conversion methods, but there we added a second rotation, which rotates the camera back to the original (right) orientation.
Hi @TjarkMiener , I've compared the index matrix returned by ImageMapper and the one that we build and it appears that the former is flipped upside down compared to the latter. |
Thanks, @mikael10j! In CTLearn the indexes of our images used to start from the bottom-left corner, while the indexes of your images start from the top-left corner. All images produced by DL1DH would have that flip then! What would be the smartest way of solving this? Adding an option to the ImageMapper that performs the flip on the output images? We could also perform the flip on the mapping tables (in the init), but there we would have to perform the flip "number of camera pixels" times. @aribrill @bryankim96 @nietootein
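A small numpy sketch of the two options being discussed (flipping each output image vs. flipping the mapping table once at init); the shapes and the tensordot-based mapping are illustrative assumptions, not the actual ImageMapper code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, size = 4, 5
mapping_table = rng.random((n_pixels, size, size))
charges = rng.random(n_pixels)

# (a) flip the mapped 2D output image after map_image()
image = np.tensordot(charges, mapping_table, axes=1)   # linear combination over camera pixels
flipped_a = np.flipud(image)

# (b) flip the mapping table once at init (the image-row axis of every per-pixel slice)
table_flipped = np.flip(mapping_table, axis=1)
flipped_b = np.tensordot(charges, table_flipped, axes=1)

assert np.allclose(flipped_a, flipped_b)                # both options give the same image
```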
…; delete default values in np.arange()
We adopted your convention, @mikael10j @vuillaut. Now the output of map_image() and get_indexmatrix() should be exactly what you had before. Can you please check?
Thank you @TjarkMiener, I'll check today.
Looks good!
It seems that the Travis error is fixed by #61, so I suggest going ahead and merging!
This PR is about the integration of the CTLearn image mapping methods into DL1DH. These image mapping methods convert a 1D array into a 2D image consisting of a square pixel grid. We implemented five different conversion methods and two shifting methods, which shift the camera pixels (only for cameras with a hexagonal layout) to arrange a square grid. The structure of the implementation is the same for all methods! We perform all costly calculations beforehand, independently of the event images. The results are stored in mapping tables, which contain the assignment information. During the training process, the mapping tables are applied to each event image, which preserves the fast runtime. As a result, we get almost the same runtime for all conversion methods.
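A minimal sketch of this precompute-then-apply pattern. The sizes, the use of scipy.sparse, and the function name map_image below are illustrative assumptions, not the DL1DH implementation:

```python
import numpy as np
from scipy import sparse

n_pixels = 1855        # e.g. number of camera pixels
output_size = 110      # side length of the square output image

# Precompute once: a sparse weight table of shape (n_pixels, output_size**2).
# Entry (i, j) is the weight with which camera pixel i contributes to output pixel j.
# Random here only to make the sketch runnable.
mapping_table = sparse.random(n_pixels, output_size**2, density=1e-3,
                              format="csr", random_state=42)

def map_image(charges, table, size):
    """Apply the precomputed table to one event: a linear combination of pixel charges."""
    flat = table.T @ charges              # shape (size**2,)
    return np.asarray(flat).reshape(size, size)

# Per-event application is a single sparse matrix-vector product, so the runtime
# is essentially independent of which conversion method built the table.
rng = np.random.default_rng(0)
event = rng.random(n_pixels)
image = map_image(event, mapping_table, output_size)
```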
Conversion methods:
We are currently calculating the areas like this: we put 100 points per square pixel and then perform a nearest-neighbour search to determine in which hexagon each point is located. Improvements and/or ideas to calculate the areas analytically are welcome!
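A sketch of this sampling approach, assuming a 10x10 grid of points per square output pixel and a scipy nearest-neighbour lookup; the function name and the normalization are assumptions, and the real implementation may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_weights(square_centers, square_width, hex_centers, points_per_side=10):
    """Estimate, per square output pixel, the fraction of its area covered by each
    hexagonal camera pixel via nearest-neighbour assignment of sample points."""
    tree = cKDTree(hex_centers)                       # hexagon centers, shape (n_hex, 2)
    # centers of the sub-cells inside one square pixel, in [-0.5, 0.5) * width
    offsets = (np.arange(points_per_side) + 0.5) / points_per_side - 0.5
    grid = np.array([(dx, dy) for dx in offsets for dy in offsets]) * square_width
    weights = np.zeros((len(square_centers), len(hex_centers)))
    for i, center in enumerate(square_centers):
        _, nearest = tree.query(center + grid)        # nearest hexagon for each sample point
        counts = np.bincount(nearest, minlength=len(hex_centers))
        weights[i] = counts / len(grid)               # fraction of sampled points per hexagon
    return weights
```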
All conversion methods are also available for the square pixel cameras. A requirement of these methods is the alignment of the camera pixel positions, so for LSTCam, NectarCam and MAGICCam we have to rotate the pixel positions before applying the algorithm. We added the option 'rotate_camera' to the image_mapper, so that we are able to rotate the mapping tables back to get the original shower orientation. However, an extra interpolation (scipy.ndimage.rotate) is needed for this step. There are two other important options in the image_mapper. The first one is interpolation_image_shape, which specifies the shape of the output grid. The second one is mask_interpolation, which sets all output grid points to zero if they are located outside of the camera. If you only want to consider image charges in your analysis, the default 'channels' option should be selected.
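A hypothetical usage sketch only: the option names below are the ones mentioned in this PR, but the exact constructor signature, value types, and call pattern of the DL1DH ImageMapper are assumptions and may differ:

```python
# Hypothetical sketch -- keyword arguments and value types are assumptions.
from dl1_data_handler.image_mapper import ImageMapper

mapper = ImageMapper(
    interpolation_image_shape={"LSTCam": (110, 110)},  # shape of the square output grid (per-camera dict assumed)
    mask_interpolation=True,                           # zero output grid points outside the camera
    rotate_camera=False,                               # rotate mapping tables back to the original orientation
)

# charges: 1D array of camera pixel intensities for one event (hypothetical call)
# image = mapper.map_image(charges, "LSTCam")
```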
The image below shows the impact of the different conversion methods on a simulated proton event captured by FlashCam.
Some preliminary results comparing the different methods with the old CTLearn dataset can be found here. Bilinear interpolation and rebinning seem to perform the best, while bicubic and nearest seem to perform the worst. The performance of oversampling is in between.
Shifting methods:
We implemented two shifting methods, which are used by several hexagonal convolution packages. The first one is axial addressing, which is used in IndexedConv.
This is the axial addressing of the LSTCam with dummy data (the index of the camera pixel is set as the charge intensity):
![LSTCam_axial_addressing](https://user-images.githubusercontent.com/37835610/57266588-77cdbd00-7074-11e9-8c1f-5c860fc3dc54.png)
@vuillaut @mikael10j Is this orientation of the axial addressing of the LSTCam the same as the one used in IndexedConv? Did you use ctapipe/instrument/camera.py to perform the axial addressing and get the index matrix? Is anything more needed to apply the IndexedConv package?
The second method is called image_shifting and basically shifts every second column by about half a hexagon length. This procedure is used by HexagDly and maybe HexaConv(?).
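A minimal sketch of this shifting idea, not the DL1DH code: the column/row pitches and the rounding-based gridding are assumptions for illustration.

```python
import numpy as np

def shift_to_square_grid(pix_x, pix_y, col_pitch, row_pitch):
    """Return integer (row, col) grid coordinates after shifting alternate columns.

    On the hexagonal layout, every second column of pixels is vertically offset by
    half a row pitch; removing that offset aligns all pixel centers on a square grid.
    """
    cols = np.rint(np.asarray(pix_x) / col_pitch).astype(int)
    y_aligned = np.asarray(pix_y) - (cols % 2) * (row_pitch / 2.0)  # undo the half-row offset
    rows = np.rint(y_aligned / row_pitch).astype(int)
    return rows - rows.min(), cols - cols.min()

# Building the output image is then a plain assignment of each camera pixel's charge
# to its (row, col) cell, i.e. a mapping table with only 0/1 weights.
```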