
Evaluating loops in the network #56

Open
dpellegr opened this issue Jan 11, 2022 · 4 comments

@dpellegr

Hi,

thank you for the inspiring video and the nicely written source code which I am thoroughly studying.

I am having trouble understanding how to properly evaluate a network with no layers, which therefore allows circular loops.
The simplest example is in this frame from your video:

[Screenshot from the video: two neurons, N0 and N1, each feeding the other.]

where, to update the value of N1, I would need the value of N0, and to update N0 I need N1. These loops may involve many neurons and can be very hard to spot and disentangle.

The only simple way out that I see is to initialize all the neurons to zero and update them using their values from the previous time step. But I am worried that this would make the network very "slow", in the sense that propagating information from sensors to actuators may take a large number of steps. Is this how it actually works?

Many thanks for your clarifications!

@davidrmiller
Owner

Hi @dpellegr , you're thinking in the right direction -- each neuron's output gets updated exactly once during each simulator step, and its output is then latched and persists until the next sim step. A neuron's inputs are taken from the latched outputs of the other neurons. That means that, depending on the order in which they get evaluated, an input value to a neuron might be another neuron's output value latched earlier in the same sim step or latched during the previous sim step. I imagine that feedback loops could act as state machines or oscillate over a number of sim steps. Also see related discussions in #47 and #18.
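
For concreteness, here is a minimal C++ sketch of the latched, in-place update scheme described above. It is not the repository's actual code; the `Neuron`, `Connection`, and `simStep` names and the tanh activation are illustrative assumptions. Note how a neuron evaluated later in the loop sees this step's value from neurons evaluated before it, but last step's value from neurons evaluated after it.

```cpp
#include <cmath>
#include <vector>

struct Connection {
    int sourceNeuron;    // index of the neuron whose latched output feeds this one
    float weight;
};

struct Neuron {
    std::vector<Connection> inputs;
    float output = 0.0f; // latched output; persists between sim steps
};

void simStep(std::vector<Neuron> &neurons)
{
    // In-place update: a neuron evaluated earlier in this loop has already
    // latched its new output, so neurons evaluated after it read this step's
    // value; neurons not yet evaluated still hold the previous step's value.
    for (Neuron &n : neurons) {
        float sum = 0.0f;
        for (const Connection &c : n.inputs) {
            sum += c.weight * neurons[c.sourceNeuron].output;
        }
        n.output = std::tanh(sum); // latch the new output
    }
}
```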

@dpellegr
Author

Thank you for your answer!

This point is still a bit perplexing to me:

> depending on the order in which they get evaluated, an input value to a neuron might be another neuron's output value latched earlier in the same sim step or latched during the previous sim step.

I would have thought of doing the stepping by creating a const copy of the neurons (with the new inputs) and updating from that copy, so that the time steps remain fully separated.
The fact that the current time step gets mixed with the previous one, albeit in a consistent way, sounds a bit strange to me. But maybe everything is just absorbed back by the evolution process and does not make any difference in the end, so one can just pick the fastest implementation.
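
To illustrate the alternative being proposed here, a minimal sketch of the fully separated ("double-buffered") update, reusing the illustrative `Neuron` and `Connection` types from the sketch above: all inputs are read from a snapshot of the previous step's outputs, so evaluation order cannot mix time steps.

```cpp
#include <cmath>
#include <vector>

void simStepBuffered(std::vector<Neuron> &neurons)
{
    // Snapshot the previous step's latched outputs before updating anything.
    std::vector<float> prev(neurons.size());
    for (size_t i = 0; i < neurons.size(); ++i) {
        prev[i] = neurons[i].output;
    }

    // Every neuron reads only previous-step values, regardless of order.
    for (Neuron &n : neurons) {
        float sum = 0.0f;
        for (const Connection &c : n.inputs) {
            sum += c.weight * prev[c.sourceNeuron];
        }
        n.output = std::tanh(sum);
    }
}
```

The snapshot copy is the extra cost of this version relative to the in-place one.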

@davidrmiller
Owner

> But maybe everything is just absorbed back by the evolution process and does not make any difference in the end, so one can just pick the fastest implementation.

You expressed that better than I could have!

@JohnMasen

I would call this a "frame": for a self-link, the input comes from the last frame.
This creates a "short-term memory system". If we add an action item that dumps the values of a certain chain to persistent storage, and a source that restores the values from it, are we creating a long-term memory system?
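
A tiny sketch of that short-term memory effect, reusing the illustrative types and `simStep` from the first sketch (the self-link weight 0.9 is an arbitrary value chosen here): a single neuron feeding itself keeps a decaying trace of an input pulse across frames.

```cpp
#include <cstdio>
#include <vector>

int main()
{
    Neuron n;
    n.inputs.push_back({0, 0.9f}); // self-link: source index 0 is this neuron

    std::vector<Neuron> net{n};
    net[0].output = 1.0f;          // inject a pulse at frame 0

    // The latched output decays over the following frames: the network
    // "remembers" the pulse for a while, then forgets it.
    for (int frame = 1; frame <= 5; ++frame) {
        simStep(net);
        std::printf("frame %d: output %.3f\n", frame, net[0].output);
    }
    return 0;
}
```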
