Random genomes and add link mutations will link output nodes to other nodes #57
Hello! A link from an output node to a hidden node or to itself can serve as a feedback channel, which can be very useful for certain tasks. Furthermore, it has been shown experimentally to produce successful solutions. Many natural physical systems incorporate the concept of direct or implicit feedback, and error backpropagation in deep learning can also be considered a form of feedback. You can take a look at my experiments related to the maze navigation problem in NEAT with Novelty Search. In that experiment, the evolved successful solvers tend to create topologies with recurrent links from the output nodes (direct or through hidden nodes). Such topologies have very good rational explanations based on the particulars of the task being solved. However, I would be glad to know whether preventing such recurrent links can boost performance on other types of tasks. Cheers!
Hello! Thank you for sharing your findings and ideas. It is greatly appreciated and will help to evolve the goNEAT library further. I've found that the loops you mentioned tend to make the general phenotype structure simpler by encapsulating the idea of correlation between the nodes in the loop, as well as providing some kind of memory for continuous tasks. Almost all successful solutions found in my experiments have some kind of loops, including loops at the output nodes. Thus, I'm not sure that such behavior of the goNEAT library is wrong. Nevertheless, I can see that for other specific tasks it may be beneficial to forbid such loops. I'll keep your proposal on my list of future improvements. Meanwhile, I'm working on performance improvements and some public API changes, which I'm planning to release soon as v4 of the library. Thank you for your participation.
I have found a great paper that may be interesting with regard to the problems you are trying to solve: The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities <http://www.mitpressjournals.org/doi/pdf/10.1162/artl_a_00319>. I saw in the phenotype scheme you provided that most of the inputs are actually not connected at all, i.e., the sensory inputs are effectively ignored. Such a topology can be the result of evolution exploiting a loophole in the fitness function or the experimental data structure. As it is mentioned in the paper:
'''The researchers were surprised to find that high-performing neural networks evolved that contained nearly no connections or internal neurons: Even most of the sensory input was ignored. The networks seemed to learn associations without even receiving the necessary stimuli, as if a blind person could identify poisonous mushrooms by color. A closer analysis revealed the secret to their strange performance: Rather than actually learning which objects are poisonous, the networks learned to exploit a pattern in how objects were presented.'''
Anyway, the paper has a lot of interesting facts to learn.
Hi! I'll take a look!
It is interesting, and I have come to some of the same conclusions. Although I did not get the desired competitive solutions I wanted, I found one particular case where a bug of mine inserted no data into the sensor neurons, so they received only the activation value for 0, meaning a lot of 0.5 sensor values (sigmoid of zero) going into the network. The loops at the output layer, with a topology of basically one input node acting only as a bias of 0.5, resulted in one species that was able to exploit the structure and timing to give the agent enough random speed and angle changes to stabilize the agents so they stopped dying and circumstantially found enough energy input conditions. Impressive, but no emergent clever behaviors that drive competition between species.
I really appreciate your time and thoughts, and I am looking forward to v4. In my personal branch I modified the recurrent-loop detection so that it escapes recursion using a lookup table of visited nodes (so more memory) instead of a bounded recursion depth. It helped me out a lot; feel free to reuse it if you like. A rough sketch of the idea is below.
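For illustration only, here is a minimal sketch of the visited-table idea; it is not the actual code from my branch or goNEAT's API, and the integer node IDs and adjacency map are assumptions made for the example:

```go
package main

import "fmt"

// reaches reports whether target is reachable from start by following
// outgoing links, memoizing visited nodes so each node is expanded only once.
func reaches(outgoing map[int][]int, start, target int, visited map[int]bool) bool {
	if start == target {
		return true
	}
	if visited[start] {
		return false // this node was already expanded; no need to revisit it
	}
	visited[start] = true
	for _, next := range outgoing[start] {
		if reaches(outgoing, next, target, visited) {
			return true
		}
	}
	return false
}

// linkCreatesLoop reports whether adding the link from -> to would introduce
// a cycle, i.e. whether from is already reachable when starting from to.
func linkCreatesLoop(outgoing map[int][]int, from, to int) bool {
	return reaches(outgoing, to, from, make(map[int]bool))
}

func main() {
	// Existing links: 1 -> 2 -> 3.
	outgoing := map[int][]int{1: {2}, 2: {3}}
	fmt.Println(linkCreatesLoop(outgoing, 3, 1)) // true: 3 -> 1 closes a loop
	fmt.Println(linkCreatesLoop(outgoing, 1, 3)) // false: 1 -> 3 stays feed-forward
}
```

The trade-off is the extra map allocation per check, but each node is expanded at most once, so the cost is bounded by the number of links rather than by an arbitrary recursion depth.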
Hi! Thank you for sharing your findings. It is really helpful for making this library better. I'll take a look at the optimizations you implemented.
Upon inspecting a new random population with a larger maximum node count, I discovered that links may be created that use output nodes as the input side of a connection to any non-sensor node, including other output nodes. As far as I can tell, this can also occur with the add-link mutations.
In most topologies, including NEAT, I think an output node should be an end state unless the connection is a recurrent one back to another non-output node, and never to itself. If my assumption is incorrect, could you point me to a paper that defines the expected constraints on the node types? What I have read so far seems to imply constraints but doesn't really discuss them or explore the benefits of such constraints on output nodes.
These types of connections are likely weeded out in normal experiments, but I speculate that their existence greatly complicates and enlarges the problem domain to search and degrades the quality or consistency of the network's actions, especially in cases I've noticed where many outputs directly activate into another output node. In my model of asexual reproduction it is much less likely to lose an output link like this once it occurs, and the initial populations will randomly contain many of them.
I think it is an issue that this implementation produces non-recurrent link genes from output nodes to hidden nodes, to other output nodes, or even back to the same node. But I don't really know.
Perhaps it should be a configurable option to not form non-recurrent output links, especially for mutations, since pruning the new mutations at runtime from client code greatly impacts runtime in the real-time simulation use cases. A rough sketch of such a check is below.
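As a minimal sketch under stated assumptions (the Node and Options types and the linkAllowed helper are illustrative names, not goNEAT's API), a config-gated validity check along these lines could be applied when generating random genomes or accepting an add-link mutation:

```go
package main

import "fmt"

type NodeType int

const (
	Sensor NodeType = iota
	Hidden
	Output
)

type Node struct {
	ID   int
	Type NodeType
}

// Options mirrors the kind of configurable switch proposed above.
type Options struct {
	// DisallowFeedForwardFromOutputs rejects non-recurrent links whose source
	// is an output node, so outputs stay terminal in the forward pass.
	DisallowFeedForwardFromOutputs bool
}

// linkAllowed reports whether a candidate link from -> to should be accepted.
func linkAllowed(opts Options, from, to Node, recurrent bool) bool {
	if to.Type == Sensor {
		return false // sensors never receive incoming links
	}
	if from.ID == to.ID && from.Type == Output {
		return false // no self-loops on output nodes
	}
	if opts.DisallowFeedForwardFromOutputs && from.Type == Output && !recurrent {
		return false // outputs may only feed other nodes via recurrent links
	}
	return true
}

func main() {
	opts := Options{DisallowFeedForwardFromOutputs: true}
	out := Node{ID: 5, Type: Output}
	hid := Node{ID: 3, Type: Hidden}
	fmt.Println(linkAllowed(opts, out, hid, false)) // false: forward link out of an output
	fmt.Println(linkAllowed(opts, out, hid, true))  // true: recurrent feedback is still allowed
	fmt.Println(linkAllowed(opts, hid, out, false)) // true: normal hidden -> output link
}
```

Gating the check inside the mutation and random-genome code, rather than pruning rejected links afterwards from client code, would avoid the runtime cost mentioned above for real-time simulations.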