Hexagonal convolution #88
Comments
Hi! I would like to learn more about and pick up this issue on hexagonal convolutions. I have worked with TensorFlow and Keras before. How can I begin contributing to this project? Thanks!
Hello, I am interested in this project. I have previous experience working with convolutions. Please point me to a good entry point for contributing to this project.
I am interested in this project. How can I get started?
Hi @shikharras, @parthpm, @ShreyanshTripathi! Many thanks for your interest in CTLearn and this issue in particular. Our recommendation would be to get to know our code by installing CTLearn on your system and reading through it, and, if you are already familiar with convolutional neural networks, checking out the couple of packages @TjarkMiener recommends above.
Hello, I am interested in working on this project. I have prior experience in deep learning and would like to start contributing to this issue.
Dear @shikharras @ShreyanshTripathi @parthpm @gremlin97,
Hello @TjarkMiener, I noticed that in the 'image shifting' section of 'image_mapping.py' we shift the alternate columns by 1 without checking whether they are in the required form (as shown here). Should our script to test the code contain images which are not aligned in this particular way, or is the input to our CNN always in the correct form? Thank you for the help.
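(Since the referenced screenshot is not included, here is a minimal NumPy sketch of the column-shifting idea under discussion. It is only an illustration, not the actual code in image_mapping.py; the padding and shift direction are assumptions.)

```python
# Illustrative sketch only (not the CTLearn implementation): shift every other
# column of a hexagonal-grid image down by one row so the staggered lattice
# lines up on a square grid that standard 2D convolutions can handle.
import numpy as np

def shift_alternate_columns(image):
    """Pad one extra row, then move odd-numbered columns down by one pixel."""
    padded = np.pad(image, ((0, 1), (0, 0)), mode="constant")
    shifted = padded.copy()
    shifted[1:, 1::2] = padded[:-1, 1::2]  # shift odd columns down by one row
    shifted[0, 1::2] = 0                   # top cell of shifted columns is empty
    return shifted

hex_image = np.arange(12).reshape(4, 3)    # dummy 4x3 "camera" image
print(shift_alternate_columns(hex_image))
```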
That's a good point you are raising, @shikharras! We are reading the pixel positions of the IACTs from the fits files in "ctlearn/ctlearn/pixel_pos_files/", which originate from ctapipe-extra. These fits files also contain rotation information. While reading the pixel positions into CTLearn, we already make sure to perform the right rotation, so the pixel positions have the required form shown above. Therefore you don't need to add this check in your script! BTW @shikharras @ShreyanshTripathi @parthpm @gremlin97, you can actually test your script with an hdf5 file of MAGIC events. This file contains 10 dummy events for the MAGIC telescope. You need to read the hdf5 file and then obtain the image charge values. Be sure to select "MAGICCam" in the ImageMapper. You can also select different conversion methods to see the differences.
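(A minimal sketch of that workflow, assuming PyTables for the file access. The file name, the table and column names ('Events', 'charge'), the pixel-array layout, and the ImageMapper constructor defaults are all assumptions for illustration and may differ from the real dummy file and the CTLearn API.)

```python
# Hedged sketch: read image charge values from the MAGIC dummy hdf5 file and map
# them onto a 2D image with CTLearn's ImageMapper. Table/column names, the
# (num_pixels, channels) layout, and the ImageMapper arguments are assumptions.
import tables
import matplotlib.pyplot as plt
from ctlearn.image_mapping import ImageMapper

with tables.open_file("magic_dummy_events.h5", mode="r") as f:
    charges = f.root.Events.col("charge")      # assumed table/column names

first_event = charges[0].reshape(-1, 1)        # assumed (num_pixels, channels) layout
mapper = ImageMapper()                         # pass a different conversion method here to compare
image = mapper.map_image(first_event, "MAGICCam")

plt.imshow(image[:, :, 0], origin="lower")
plt.colorbar(label="charge")
plt.show()
```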
Thanks for your response @TjarkMiener! I have written a script similar to 'test_image_mapper.ipynb' for the hdf5 file of MAGIC events. However, in the beginning I was trying to use the HDF5DataLoader from ctlearn.data_loading, and it threw an error while accessing the array info group, since the group in the MAGIC file is not named as expected by the f.root.Array_Info.iterrows() call given in the HDF5DataLoader function here:
So is the solution to this problem to rename the group to Array_Info before using the HDF5DataLoader, or should the code in the above function be changed to retrieve the correct group name? Thank you for your help.
@TjarkMiener As per your suggestion, I wrote a script to test the images of MAGIC events. In it, I selected an image charge value and plotted it before pre-processing with the image_shifting algorithm.
Great that you were able to display the MAGIC events, @shikharras and @h3li05369!
@shikharras Using the
@h3li05369 The event that you are showing in your first two plots looks good to me! This is exactly what I meant with the task! Awesome that you went one step further and got familiar with the usage of hexagdly. I haven't studied this package in detail, so could you please explain your four plots in more detail? Thanks!
@TjarkMiener I hope that I've explained it clearly.
@h3li05369 Thanks! At the moment there is no need to start converting this package! All interested students should focus on their GSoC applications. Don't hesitate to ask for comments and suggestions.
Thank you for replying. It keeps my motivation high 😃. |
Where can I find the dataset to train the models? The link provided in the readme section is asking for authorisation. I need the data to run the models on my system. |
@h3li05369 For the time being, we aren't allowed to share CTA private data with you or any non-CTA member. We haven't found a solution for this problem yet. A workaround here would be that you fork the CTLearn project, make your changes, and then I could set up some runs for you on our GPUs. However, for now the application has the highest priority.
Hi all @shikharras @ShreyanshTripathi @parthpm @h3li05369 @gremlin97, my email is [email protected] in case you want some feedback on your application. Cheers!
Hi everybody, I am a graduate in physics from the Complutense University of Madrid. |
Hi, I am interested in this project and I am a Computer Science student. Can someone help me get started with making contributions to this project? |
There are mainly two ways of dealing with raw IACT images captured by cameras made of hexagonal lattices of photomultipliers. You can either transform the hexagonal camera pixels to square image pixels (#56) or modify your convolution and pooling methods. For the latter there are packages like IndexedConv and HexagDLy, which have been shown to improve performance.
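(As a rough illustration of the second approach, here is a minimal hexagonal-convolution block sketched with HexagDLy, a PyTorch package. The layer arguments are assumed to mirror the torch.nn style shown in HexagDLy's documentation, with kernel_size counting hexagonal rings and inputs already addressed on a shifted square grid; treat the exact signatures as assumptions rather than a verified recipe.)

```python
# Hedged sketch: a tiny hexagonal CNN block built from HexagDLy's drop-in layers.
# Signatures are assumed to mirror torch.nn; kernel_size counts hexagonal "rings".
import torch
import torch.nn as nn
import hexagdly  # https://github.com/ai4iacts/hexagdly

class HexBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = hexagdly.Conv2d(in_channels=1, out_channels=8,
                                    kernel_size=1, stride=1)
        self.pool = hexagdly.MaxPool2d(kernel_size=1, stride=2)

    def forward(self, x):
        # x: (batch, channels, height, width) hex image on a shifted square grid
        return self.pool(torch.relu(self.conv(x)))

dummy = torch.zeros(1, 1, 40, 40)  # placeholder camera image
print(HexBlock()(dummy).shape)
```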