
DEEP LEARNING FOR IMAGE STITCHING

DIANA MAXIMA DRZIKOVA

/src folder

This folder contains the modules used to train the two experimental neural networks and a script for generating an image stitching dataset.

The image stitching dataset can be generated with the dataAugmentation.py module. First, create a text file listing the images that should be augmented:

$ find "folder with images" -type f > dataset.txt
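The resulting dataset.txt simply contains one image path per line; the paths below are only illustrative:

dataset/datasetA/original/img_0001.png
dataset/datasetA/original/img_0002.png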

Running the dataAugmentation.py module augments the images according to the chosen options. Some examples:

This creates a dataset of synthetic image pairs:

./python3.10 dataAugmentation.py --split

This command additionally augments the images themselves, for example by blurring or rotating them:

./python3.10 dataAugmentation.py --split --blur --rotation

The two experimental neural networks are defined in the file models.py.

The Autoencoder and OriginalHomography classes are used by the training and evaluation scripts.

The OriginalHomography model was reconstructed around estimating the homography matrix from SuperGlue matches, so its script cannot be run as-is; it is included only to illustrate the training process.

The module train_homography_images.py takes the input images and a CSV file of ground-truth homography matrices generated by the homography_labels.py module.
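As an illustration only, assuming each CSV row stores an image identifier followed by the nine values of a flattened 3x3 matrix (the actual column layout is defined by homography_labels.py and may differ), such a file could be read back like this:

import csv
import numpy as np

def load_homography_labels(csv_path):
    # Hypothetical layout: image_id, h11, h12, ..., h33
    labels = {}
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            image_id, values = row[0], row[1:10]
            labels[image_id] = np.array(values, dtype=np.float32).reshape(3, 3)
    return labels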

Both models can be evaluated with the evaluate_* modules:

./python3.10 evaluate_homography.py
./python3.10 evaluate_keypoint_detection.py

The module match_grid.py (inspired by the pair matcher in the original paper) performs matching between images. By default it reads from the dataset/evaluation_pipeline folder, which contains two stitching examples: a traditional two-image stitch and a 2x2 grid stitch.

./python3.10 match_grid.py

The grid defaults to 2x2, but grids of other sizes can be stitched as well; the correspondences are mapped across the grid. The number of rows times the number of columns corresponds to the number of images (e.g. --rows 1 --cols 3 stitches three images).

The pipeline reads the image groups from a file in which each line lists the parts of one image that is about to be stitched (an illustrative example of this file is shown below the command):

./python3.10 match_grid.py --rows 1 --cols 3
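For a 1x3 grid, such a file could look like the following; the file names are hypothetical and the actual format is defined in match_grid.py:

scene01_part1.png scene01_part2.png scene01_part3.png
scene02_part1.png scene02_part2.png scene02_part3.png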

The file match_pairs.py (adapted from the original paper, https://github.com/magicleap/SuperGluePretrainedNetwork) contains the evaluation.

./python3.10 match_pairs.py --viz

This command visualizes the matches and reports the matching accuracy against the ground truth of the baseline images.

The default test dataset is augmentedblur; it can be changed as follows:

./python3.10 match_pairs.py --viz --input_dir dataset/datasetA/augmented

The final module is trainSuperPoint.py, which implements the fine-tuning of SuperPoint:

./python3.10 trainSuperPoint.py

Training is done on microscopic images generated for keypoint detection training.

/saved_models folder

This folder contains the pre-trained models for keypoint detection and homography estimation.
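A minimal sketch of how one of these checkpoints could be loaded, assuming the models are PyTorch modules saved as state dicts and that Autoencoder takes no constructor arguments (the checkpoint file name below is only illustrative):

import torch
from models import Autoencoder  # class defined in /src/models.py

model = Autoencoder()  # assumed no-argument constructor
state = torch.load("saved_models/autoencoder.pth", map_location="cpu")  # illustrative file name
model.load_state_dict(state)
model.eval()  # switch to inference mode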

/datasets folder

This folder contains datasetA, datasetB, datasetC, datasetD, and evaluation_pipeline. Each dataset is described in the thesis report.

Datasets B, C, and D can be found at: https://drive.google.com/drive/folders/18lEa9SyP-gIkGCWRjQS5RSz2QsgaYTn0?usp=sharing
