Capacity experiment: setup and validation
- Set up the experiment.
- Generate a sequence for each world, of length numElements^2: an exhaustive sweep through the elements (see the sequence-generation sketch below this list).
- Train the temporal memory: two sweeps for each world, with TM learning enabled and TP learning disabled (see the phase sketch below this list). Validation checks:
    - TM synapse and segment counts grow until about halfway through training (the first sweep), then stop growing.
    - TP connections per column stay the same throughout.
    - The TM bursts until about halfway through training, then predicts intermittently.
    - TP active columns keep changing, following the sweep pattern (repeated sensory input produces the same TP active columns).
- Train the temporal pooler: one sweep for each world, with TM and TP learning enabled. Validation checks:
    - The TM makes all correct predictions, with no extra predictions.
    - TM synapse and segment counts don't change.
    - At the start of each world, TP connections per column grow for a few steps and then stop (?).
    - As soon as the TM predicts correctly, the TP changes its active cells and stays fixed from then on.
    - TP active cells differ between worlds.
- Test: a random exploration sequence for each world, of length numElements^2, divided into 4 separate subsequences (with resets in between); TM and TP learning disabled. Validation checks:
    - TM synapse and segment counts don't change.
    - TP connections per column don't change.
    - As soon as the TM predicts correctly, the TP changes its active cells and stays fixed from then on.
    - TP active cells differ between worlds.
    - The stability confusion matrix and distinctness matrix match manual comparisons (see the metrics sketch below).
    - The final metrics table matches the confusion matrices.
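A minimal sketch of how the two kinds of sequences could be generated for one world. This is not the repository's actual code: the element representation (plain integers), the interpretation of the exhaustive sweep as every ordered (current, next) element pair, and the fixed seed are assumptions for illustration; only the numElements^2 length comes from the description above.

```python
import itertools
import random


def exhaustiveSweep(elements):
    """Every ordered (current, next) element pair: numElements^2 steps."""
    return list(itertools.product(elements, repeat=2))


def randomExploration(elements, length, seed=42):
    """`length` random (current, next) element pairs for the test phase."""
    rng = random.Random(seed)
    sequence = []
    current = rng.choice(elements)
    for _ in range(length):
        nxt = rng.choice(elements)
        sequence.append((current, nxt))
        current = nxt
    return sequence


elements = list(range(10))                        # hypothetical world with 10 elements
trainSequence = exhaustiveSweep(elements)         # len == 10 ** 2
testSequence = randomExploration(elements, len(elements) ** 2)
```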
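A rough sketch of the three phases and their learning flags, assuming hypothetical `tm` and `tp` objects that expose `compute(..., learn=...)` and `reset()`, and world objects carrying `trainSequence` / `testSequence` lists like those above. The real experiment runner in this repository may structure this differently; the sketch only illustrates which learning flags are on in each phase and where the test resets happen.

```python
def feed(tm, tp, sequence, tmLearn, tpLearn, reset=False):
    """Run one pass of (sensor, motor) patterns through the TM, then the TP."""
    if reset:
        tm.reset()
        tp.reset()
    for sensorPattern, motorPattern in sequence:
        tm.compute(sensorPattern, motorPattern, learn=tmLearn)
        tp.compute(tm.activeCells, learn=tpLearn)


def runCapacityExperiment(tm, tp, worlds):
    # 1. Train temporal memory: two sweeps per world, TM learning on, TP learning off.
    for world in worlds:
        for _ in range(2):
            feed(tm, tp, world.trainSequence, tmLearn=True, tpLearn=False)

    # 2. Train temporal pooler: one sweep per world, TM and TP learning on.
    for world in worlds:
        feed(tm, tp, world.trainSequence, tmLearn=True, tpLearn=True)

    # 3. Test: random exploration split into 4 subsequences with resets, learning off.
    for world in worlds:
        quarter = len(world.testSequence) // 4
        for i in range(4):
            subsequence = world.testSequence[i * quarter:(i + 1) * quarter]
            feed(tm, tp, subsequence, tmLearn=False, tpLearn=False, reset=True)
```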
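A sketch of the kind of pairwise comparison behind the stability and distinctness checks. The overlap-based definitions here are assumptions for illustration, not the metrics code used by the experiment: `cellsPerStep` is assumed to hold the TP active-cell set recorded at each test step of one world, and `cellsPerWorld` a representative (e.g. union) cell set per world.

```python
import numpy as np


def overlap(a, b):
    """Number of cells two TP cell sets have in common."""
    return len(set(a) & set(b))


def stabilityMatrix(cellsPerStep):
    """Pairwise overlap of TP representations across the steps of one world."""
    n = len(cellsPerStep)
    matrix = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            matrix[i, j] = overlap(cellsPerStep[i], cellsPerStep[j])
    return matrix


def distinctnessMatrix(cellsPerWorld):
    """Pairwise overlap of TP representations across worlds."""
    n = len(cellsPerWorld)
    matrix = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            matrix[i, j] = overlap(cellsPerWorld[i], cellsPerWorld[j])
    return matrix
```

High off-diagonal values in the stability matrix and low off-diagonal values in the distinctness matrix correspond to the "same cells within a world, different cells across worlds" checks listed above.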