Question about 'supervised_mnist' spike trains #654
Replies: 5 comments 3 replies
-
Since the network has an inhibitory layer, the first neuron to spike inhibits every other neuron in the layer and prevents them from spiking. With many neurons there are several ways this can go wrong: for example, if two or more neurons spike at the same time, each sends an inhibition signal to all the others, which can cause massive inhibition. But roughly speaking, the architecture holds up when scaling.
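To make the failure mode above concrete, here is a minimal toy sketch (plain NumPy, not BindsNET code) of winner-take-all lateral inhibition. All names and constants are hypothetical; the point is only that when several neurons cross threshold in the same timestep, the inhibition they broadcast stacks up:

```python
# Hypothetical sketch of winner-take-all lateral inhibition (not BindsNET code).
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 5
threshold = 1.0
inhibition = -0.5          # negative input sent to non-spiking neurons
v = np.zeros(n_neurons)    # membrane potentials

for t in range(100):
    v += rng.random(n_neurons) * 0.05           # random excitatory drive
    spiking = v >= threshold
    if spiking.any():
        # every spiking neuron inhibits all the others; if two or more
        # spike in the same step, the inhibition stacks and can suppress
        # the whole layer (the "massive inhibition" case above)
        v[~spiking] += inhibition * spiking.sum()
        v[spiking] = 0.0                         # reset the winners
```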
-
Hi @Hananel-Hazan, I was wondering what would be the best way to reduce the size of the spike trains while still retaining the information needed for training? The average spike frequency for a 200*200 image (
This is why I was asking about splitting the spike trains into bins/batches (though that would mean a complete rewrite of the training and testing logic). Do you have any suggestions for shrinking the spike trains while retaining the important information? Or do you know of any papers that explain how to do this?
-
Normalizing the image is a good step. What about lowering the intensity of the Poisson encoding (assuming you are using it; if not, of whichever encoding you use to convert the integers into a spike train)?
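The intensity suggestion can be sketched like this. This is a simplified Bernoulli approximation of Poisson rate coding, not BindsNET's actual encoder; the function name, `intensity` knob, and constants are all hypothetical. Lowering `intensity` scales down every pixel's firing probability, so the spike train stays the same shape but becomes much sparser:

```python
# Hypothetical sketch of rate (Poisson-style) encoding with an intensity knob.
# BindsNET's own PoissonEncoder differs in its details.
import numpy as np

def poisson_encode(image, time, intensity, dt=1.0, seed=0):
    """Bernoulli approximation of Poisson encoding: each pixel's value,
    scaled by `intensity`, sets its spiking probability per timestep."""
    rng = np.random.default_rng(seed)
    rates = image.ravel() * intensity           # per-pixel firing rate
    p = np.clip(rates * dt, 0.0, 1.0)           # spike probability per step
    return rng.random((time, p.size)) < p       # bool array: (time, n_pixels)

img = np.random.default_rng(1).random((28, 28))
dense = poisson_encode(img, time=250, intensity=0.128)
sparse = poisson_encode(img, time=250, intensity=0.032)  # fewer spikes overall
```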
-
I'm not sure I understand this... To my understanding, in the DiehlAndCook2015 model, the excitatory neurons are connected one-to-one with the inhibitory neurons, meaning that when the winning neuron fires, all the other excitatory neurons are inhibited. How would adding more inhibitory neurons help? Would it not be easier to increase the strength of the inhibitory connections?
Also, I have an unrelated question. Given that the neuron with the highest connection weight "wins", what is the mechanism that prevents it from winning forever? Logically, the strongest connection would keep getting stronger and continue to win on every iteration. I know this is not the case; could you clarify why? Again, I really appreciate your support @Hananel-Hazan.
-
Thanks @Hananel-Hazan
What is the mechanism or variable that increases the neuron's threshold after it spikes? Is it theta_plus, or is there another mechanism?
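A simplified sketch of how a theta_plus-style adaptive threshold behaves (the parameter names echo BindsNET's DiehlAndCookNodes, but the dynamics here are a toy reduction, not the library's implementation): each spike raises the neuron's effective threshold by `theta_plus`, and the adaptation decays slowly, so a frequently winning neuron becomes progressively harder to fire:

```python
# Simplified sketch of an adaptive threshold (homeostasis), assuming
# BindsNET-style names; not the library's actual neuron dynamics.
import numpy as np

theta_plus = 0.05         # threshold increment per spike
tc_theta_decay = 1e4      # decay time constant (in timesteps)
base_thresh = 1.0

v = 0.0
theta = 0.0
spike_times = []
for t in range(1000):
    v += 0.02                               # constant excitatory drive
    if v >= base_thresh + theta:
        spike_times.append(t)
        theta += theta_plus                 # raise the effective threshold
        v = 0.0                             # reset after the spike
    theta *= np.exp(-1.0 / tc_theta_decay)  # slow homeostatic decay

# inter-spike intervals grow as theta accumulates,
# so a "winning" neuron cannot keep winning indefinitely
gaps = np.diff(spike_times)
```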
-
Hi, I have a quick question about the spike trains in supervised_mnist and eth_mnist using diehlandcook2015.
Suppose we have 28*28 images encoded with Poisson spikes (time = 250, dt = 1), so the input has shape (250, 1, 784)...
Is each of these 'full' spike trains fed to every neuron in the processing layer, with one neuron chosen as the winner while the rest are inhibited?
If so, would this not be problematic for larger images, time durations, and, therefore, larger spike trains?
EDIT: For reference, I am using images resized to 200*200 with time=500 and dt=1, so the input has shape (500, 1, 40000).
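A back-of-the-envelope check of the two tensor sizes mentioned above (pure arithmetic, no BindsNET required) shows why the resized setup feels heavy: the spike-train tensor, shaped (time, batch, n_pixels), grows by roughly two orders of magnitude:

```python
# Element counts for the (time, batch, n_pixels) spike-train tensors above.
mnist = 250 * 1 * (28 * 28)        # 196,000 elements
resized = 500 * 1 * (200 * 200)    # 20,000,000 elements
ratio = resized / mnist            # ~102x more data per sample
```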