Cosmic-CoNN is an end-to-end solution for tackling the cosmic-ray (CR) detection problem in CCD-captured astronomical images. It includes a deep-learning framework, high-performance CR detection models, a new dataset, and a suite of tools for using the models, shown in the figure above:
- LCO CR dataset, a large, diverse cosmic-ray dataset consisting of over 4,500 scientific images from the 23 instruments of the Las Cumbres Observatory (LCO) global telescope network. CRs are labeled accurately and consistently across many diverse observations from various instruments. To the best of our knowledge, this is the largest dataset of its kind.
- A PyTorch deep-learning framework that trains generic, robust CR detection models for ground- and space-based imaging data, as well as spectroscopic observations.
- A suite of tools, including console commands, a web app, and Python APIs, that makes deep-learning models easily accessible to astronomers.
Visual inspection of Cosmic-CoNN CR detection results. Detecting CRs in a Gemini GMOS-N 1×1 binning image with our generic ground-imaging model. The model was trained entirely on LCO data, yet all visible CRs in the image stamp are correctly detected regardless of their shapes or sizes.
The Cosmic-CoNN NRES model robustly detects CRs over the spectrum on an LCO NRES spectroscopic image. The horizontal bands in the left image are the spectroscopic orders, which are left out of the CR mask.
We recently added optional dependency installs for pip.
We recommend installing Cosmic-CoNN in a new virtual environment; see the step-by-step installation guide. To get a ~10x speed-up with GPU acceleration, see Install for a CUDA-enabled GPU.
# basic install for CR detection or library integration
$ pip install cosmic-conn
# include Flask to use the interactive tool
$ pip install "cosmic-conn[webapp]"
# install all dependencies for development
$ pip install "cosmic-conn[develop]"
After installation, you can batch process FITS files for CR detection from the terminal:
$ cosmic-conn -m ground_imaging -e SCI -i input_dir
- -m or --model specifies the CR detection model. "ground_imaging" is loaded by default; "NRES" is the spectroscopic model for LCO NRES instruments. You can also download a Hubble Space Telescope model trained by deepCR and pass in the model's path.
- -i or --input specifies the input file or directory.
- -e or --ext defines which FITS extension to read image data from. By default, we read the first valid image array in the order hdul[0] -> hdul[1] -> hdul['SCI'], unless the user specifies an extension name.
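The default extension fallback described above can be sketched as follows. Note that pick_image_hdu and _FakeHDU are hypothetical illustrations, not part of the cosmic-conn API; with astropy, hdul would be the object returned by fits.open():

```python
class _FakeHDU:
    """Stand-in for an astropy HDU, used here for illustration only."""
    def __init__(self, data):
        self.data = data

def pick_image_hdu(hdul, ext=None):
    """Return the first HDU carrying image data, trying
    hdul[0] -> hdul[1] -> hdul['SCI'], unless `ext` is given."""
    if ext is not None:
        return hdul[ext]
    for key in (0, 1, "SCI"):
        try:
            hdu = hdul[key]
        except (LookupError, TypeError):
            continue
        if getattr(hdu, "data", None) is not None:
            return hdu
    raise ValueError("no valid image data found in any extension")

# Demo: the primary HDU is empty, so the image in hdul[1] is picked up.
hdul = {0: _FakeHDU(None), 1: _FakeHDU([[1.0, 2.0]]), "SCI": _FakeHDU([[3.0]])}
print(pick_image_hdu(hdul).data)  # [[1.0, 2.0]]
```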
See documentation for the complete user guide.
It is also easy to integrate Cosmic-CoNN CR detection into your data workflow. Let image be a two-dimensional float32 numpy array of any size:
from cosmic_conn import init_model
# initialize a Cosmic-CoNN model
cr_model = init_model("ground_imaging")
# the model outputs a CR probability map in np.float32
cr_prob = cr_model.detect_cr(image)
# convert the probability map to a boolean mask with a 0.5 threshold
cr_mask = cr_prob > 0.5
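A common follow-up step, not provided by the library itself and sketched here with plain numpy on synthetic data, is to replace the flagged pixels. For simplicity we use the median of the unflagged pixels; in practice a local median filter or interpolation over neighboring pixels is preferable:

```python
import numpy as np

# Synthetic stand-ins for `image` and `cr_mask` from the snippet above.
rng = np.random.default_rng(0)
image = rng.normal(loc=100.0, scale=5.0, size=(64, 64)).astype(np.float32)
cr_mask = np.zeros(image.shape, dtype=bool)
cr_mask[10, 10] = cr_mask[20, 30] = True  # pretend these pixels are CR hits

# Replace each flagged pixel with the median of the unflagged pixels.
cleaned = image.copy()
cleaned[cr_mask] = np.median(image[~cr_mask])
```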
Launch the interactive web app from the terminal:
$ cosmic-conn -am ground_imaging -e SCI
The Cosmic-CoNN web app automatically finds large CRs for close inspection. It supports live CR mask visualization and editing, and it is especially useful for finding suitable thresholds for different types of observations. We are working on adding a paintbrush tool for pixel-level manual editing.
The Cosmic-CoNN web app interface.
See the documentation for the developer guide on using the LCO CR dataset, data reduction, and model training.
This repository is part of our Cosmic-CoNN research paper. Our methods and a thorough evaluation of the models' performance are available in the paper. If you use Cosmic-CoNN or the LCO CR dataset in your research, please cite our paper: The Astrophysical Journal, NASA ADS
@article{Xu_2023,
doi = {10.3847/1538-4357/ac9d91},
url = {https://dx.doi.org/10.3847/1538-4357/ac9d91},
year = {2023},
month = {jan},
publisher = {The American Astronomical Society},
volume = {942},
number = {2},
pages = {73},
author = {Chengyuan Xu and Curtis McCully and Boning Dong and D. Andrew Howell and Pradeep Sen},
title = {Cosmic-CoNN: A Cosmic-Ray Detection Deep-learning Framework, Data Set, and Toolkit},
journal = {The Astrophysical Journal},
}
Please also cite the LCO CR dataset if you use the Cosmic-CoNN ground_imaging model or the data in your research:
@dataset{xu_chengyuan_2021_5034763,
author = {Xu, Chengyuan and
McCully, Curtis and
Dong, Boning and
Howell, D. Andrew and
Sen, Pradeep},
title = {Cosmic-CoNN LCO CR Dataset},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {0.1.0},
doi = {10.5281/zenodo.5034763},
url = {https://doi.org/10.5281/zenodo.5034763}
}
Please also cite our CVPR 2022 paper, Interactive Segmentation and Visualization for Tiny Objects in Multi-megapixel Images, if you used the interactive tool:
@InProceedings{Xu_2022_CVPR,
author = {Xu, Chengyuan and Dong, Boning and Stier, Noah and McCully, Curtis and Howell, D. Andrew and Sen, Pradeep and H\"ollerer, Tobias},
title = {Interactive Segmentation and Visualization for Tiny Objects in Multi-Megapixel Images},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {21447-21452}
}
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.