I have been playing around with your repository for a few days and noticed that in the NNDMD example you use a non-linear encoder and a linear decoder, e.g. [here](https://pykoopman.readthedocs.io/en/master/tutorial_koopman_nndmd_examples.html).
Part of the _nndmd.py code seems to depend explicitly on a linear decoder: it actually crashes if I choose a non-linear decoder, because the eigenvectors are mapped back using an 'effective linear transformation'.
Is there a specific reason why this is a good choice? It seems counterintuitive that a linear transformation can invert a non-linear one. For example, the DeepKoopman code by Lusch et al. appears to use a non-linear decoder. Can you point me to a theoretical justification (e.g. a paper) for using a linear decoder?
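For context, here is a minimal sketch of the structure I am asking about. This is not the pyKoopman implementation; all names, shapes, and weights are illustrative. It just shows why a linear decoder makes the mapping of latent eigenvectors back to state space well-defined, while a non-linear decoder would not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from pyKoopman)
n_state, n_latent = 2, 4

# Non-linear encoder: z = tanh(W x + b)
W = rng.standard_normal((n_latent, n_state))
b = rng.standard_normal(n_latent)

def encode(x):
    return np.tanh(W @ x + b)

# Linear decoder: x_hat = C z (no activation, no bias)
C = rng.standard_normal((n_state, n_latent))

def decode(z):
    return C @ z

# Koopman operator acting on the latent coordinates
K = rng.standard_normal((n_latent, n_latent))
eigvals, eigvecs = np.linalg.eig(K)

# Because the decoder is linear, each latent eigenvector v has a
# fixed spatial mode C @ v -- the 'effective linear transformation'
# that maps eigenvectors back to state space.
modes = C @ eigvecs  # shape (n_state, n_latent)

# With a non-linear decoder g, g(a * v) != a * g(v) in general, so
# a single spatial mode per eigenvector is no longer well-defined.
x = rng.standard_normal(n_state)
x_hat = decode(encode(x))
print(modes.shape)  # (2, 4)
```

The sketch suggests the crash I see may be by design: the mode-extraction step relies on the decoder commuting with scalar multiplication, which only a linear map does.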