Code for the preprint: Pre-training artificial neural networks with spontaneous retinal activity improves motion prediction in natural scenes
In this repository, we provide the code used to generate the results presented in our preprint.
We provide scripts for generating the data, training the artificial neural networks (ANNs), and evaluating the performance and characteristics of the trained ANNs.
We provide the code to generate the following datasets:
- A dataset of natural scenes with prominent motion, specifically a virtual maze simulation. This dataset was generated with the 3D animation software Blender and its Python API (see the rendering sketch after this list).
- A dataset of spontaneous retinal activity, generated based on the model introduced by Teh et al. (2023) (see the toy sketch after this list).
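
As a rough illustration of the rendering step, the sketch below renders each frame of an animated Blender scene via the Python API. It is not the exact script used for our dataset: the resolution, output path, and invocation are assumptions.

```python
# Minimal sketch: render the frames of an animated camera path with Blender's
# Python API. Run inside Blender, e.g.:
#   blender maze.blend --background --python render_frames.py
# The scene file, resolution, and output directory are illustrative assumptions.
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.resolution_x = 64  # assumed frame size, not the dataset's actual resolution
scene.render.resolution_y = 64

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)  # advance along the animated camera path
    scene.render.filepath = f"//frames/frame_{frame:04d}.png"  # "//" = relative to the .blend file
    bpy.ops.render.render(write_still=True)  # render and save the current frame
```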
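For intuition about the second dataset, the following toy sketch produces wave-like spontaneous activity on a 2D grid. It is emphatically *not* the Teh et al. (2023) model; it only illustrates the kind of spatiotemporal structure such data contain, with all sizes and parameters chosen arbitrarily.

```python
# Toy illustration (NOT the Teh et al. 2023 model): a Gaussian activity bump
# drifting across a 2D grid, mimicking a propagating retinal wave.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64   # assumed grid size
T = 100      # number of frames
frames = np.zeros((T, H, W), dtype=np.float32)

pos = rng.uniform(10, 54, size=2)   # initial wavefront position
vel = rng.normal(0, 1, size=2)
vel /= np.linalg.norm(vel)          # unit-speed drift direction
ys, xs = np.mgrid[0:H, 0:W]
for t in range(T):
    pos += vel  # drift the wavefront one step
    frames[t] = np.exp(-((ys - pos[0]) ** 2 + (xs - pos[1]) ** 2) / (2 * 5.0 ** 2))
np.save("retinal_waves_toy.npy", frames)
```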
All generated datasets are available via Zenodo.
We provide the code to train and evaluate the ANN models designed for the task of Next Frame Prediction. We implemented convolutional recurrent neural networks with a focus on modularity; for instance, the models support different types of recurrent layers, such as LSTM, GRU, and vanilla RNN (see the sketch below).
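
To illustrate this modularity, here is a minimal PyTorch sketch of a convolutional recurrent next-frame predictor with a swappable recurrent cell. The class and parameter names are illustrative, not the repository's actual API.

```python
# Sketch of a modular convolutional recurrent model for next-frame prediction.
# Names (NextFramePredictor, CELLS, cell_type, ...) are illustrative assumptions.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: all gates are computed with 2D convolutions."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_new

class ConvRNNCell(nn.Module):
    """Vanilla convolutional RNN cell."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        return torch.tanh(self.conv(torch.cat([x, h], dim=1)))

# Swapping the recurrent layer type amounts to choosing a different cell class;
# a convolutional LSTM cell would plug into the same interface.
CELLS = {"gru": ConvGRUCell, "rnn": ConvRNNCell}

class NextFramePredictor(nn.Module):
    def __init__(self, in_ch=1, hid_ch=32, cell_type="gru"):
        super().__init__()
        self.hid_ch = hid_ch
        self.cell = CELLS[cell_type](in_ch, hid_ch)
        self.readout = nn.Conv2d(hid_ch, in_ch, kernel_size=1)  # hidden state -> frame

    def forward(self, clip):  # clip: (batch, time, channels, height, width)
        b, t, _, h, w = clip.shape
        state = clip.new_zeros(b, self.hid_ch, h, w)
        for i in range(t):  # unroll the recurrence over the input frames
            state = self.cell(clip[:, i], state)
        return self.readout(state)  # prediction for the next frame

model = NextFramePredictor(cell_type="gru")
pred = model(torch.randn(2, 10, 1, 64, 64))  # -> shape (2, 1, 64, 64)
```

Keeping the recurrent cell behind a small common interface (input and hidden state in, new hidden state out) is one simple way to make such models modular; the actual design in the repository may differ.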