Sidewalks are supposed to provide a safe, environmentally friendly way for pedestrians to move around the city. For people with disabilities, poor sidewalk quality can therefore have a detrimental impact on independence, quality of life, and overall physical activity. Unfortunately, sidewalk accessibility issues are extremely common, but in recent years there has been significant progress in collecting data on these issues and making it transparent and available. Most of this data is crowdsourced, and only recently have people begun investigating how machine learning can automate this data collection and analysis. One area of particular interest is AI assistance in validating crowdsourced accessibility labels, whether fully automatic or in collaboration with human validators. Manual validation is laborious, but it has immense value for data quality and for increasing the amount of data that can be brought to governmental bodies who can act on improving sidewalk infrastructure (consider this resource). Alleviating the labor that accompanies validation is therefore critical.
This repository is used to experiment with training different models on panorama images. Each experiment can be visualized, giving a clear overview of a model's performance. These visualizations can then be used to compare models and determine which one best fits a given dataset.
This repository is a fork of the sidewalk-cv-2021 repo, to which I added my own experiments. These experiments come in the form of bash scripts that call the Python procedures used to train, test, and evaluate models on different datasets.
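As an illustration only, an experiment script in this style typically strings a few of those Python entry points together. The script names, flags, and paths below are hypothetical placeholders rather than the actual interface of this repo's scripts; check the existing bash scripts and the original README for the real arguments.

```bash
#!/bin/bash
# Hypothetical experiment script: train a model on one dataset, then evaluate it.
# All file names and flags below are placeholders -- adapt them to the actual
# training/evaluation scripts in this repo.
set -e

DATASET_DIR=datasets/my_city_panos   # placeholder dataset location
OUTPUT_DIR=results/my_experiment     # placeholder output location

mkdir -p "$OUTPUT_DIR"

# Train (placeholder arguments)
python train.py \
    --data_dir "$DATASET_DIR" \
    --output_dir "$OUTPUT_DIR"

# Evaluate the trained model (placeholder arguments)
python eval.py \
    --data_dir "$DATASET_DIR" \
    --checkpoint "$OUTPUT_DIR/best_model.pth"
```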
More information about the subject, method of work, and previous experiments can be found in the original README. To start using this repo, follow the guide from here. That guide should give a clear and sufficient overview of the different Python scripts and how to use them in the experiments. There are a couple of things to keep in mind, however:
- To run the experiments, ask Michael Duan for two files, block_c_10classes.pth and hrnetv2_w48_imagenet_pretrained.pth, and place them in the models folder; they contain the weights needed for some of the experiments.
- I had a lot of issues trying to run these experiments on a Windows machine. Try using a Linux environment instead.
- You can try different batch sizes in train.py to optimize training time on your machine; see the sketch after this list for where the batch size typically comes into play.
- I have asked Mikey Saugstad for an SFTP key, but this feature is still a work in progress. For now, the panoramas have to be downloaded from a different repository.
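As a point of reference for the batch-size tip above, here is a minimal, generic PyTorch sketch of where the batch size usually appears in a training setup. This is not the actual code from train.py; the dataset and loop are placeholders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the panorama crops used during training.
images = torch.randn(256, 3, 224, 224)
labels = torch.randint(0, 10, (256,))
dataset = TensorDataset(images, labels)

# The batch size is typically set when the DataLoader is created.
# Larger batches can speed up training on GPUs with enough memory;
# smaller batches avoid out-of-memory errors on weaker machines.
BATCH_SIZE = 32  # try e.g. 16, 32, or 64 depending on your hardware

loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)

for batch_images, batch_labels in loader:
    # ... forward pass, loss, and backward pass would go here ...
    pass
```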