The current way of generating the graph in `graphgen.lua` is to rely on the `input` and `self.output` of each module. The underlying assumption is that these tensors are the same for every forward call (i.e., no new tensors are created for the input or the output; they are reused across calls).
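For context, here is a minimal sketch of the kind of identity-based bookkeeping this implies. The `nodeFor` helper is hypothetical (not the actual code in `graphgen.lua`), but it shows the core idea: tensors are used directly as table keys, so the same tensor object must appear on every forward pass for the edges to line up.

```lua
-- Hypothetical illustration of identity-based node lookup.
-- If a module hands back a different tensor object on each forward,
-- nodeFor() creates a spurious new node instead of reusing the old one.
local nodes, count = {}, 0
local function nodeFor(tensor)
  if nodes[tensor] == nil then
    count = count + 1
    nodes[tensor] = count  -- the tensor object itself is the key
  end
  return nodes[tensor]
end
```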
This is not the case for `nn.Parallel`, which creates a new output tensor at every forward pass. The same applies to `nn.ParallelTable` when the input is a tensor, and to any module that allocates a new `self.output` tensor at every forward pass.
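The breakage is easy to see by comparing tensor addresses across two forward passes. A sketch, assuming a Torch7 setup with `nn` installed and the `nn.Parallel` behavior described above (`torch.pointer` returns the address of the underlying object):

```lua
require 'nn'

local reusing = nn.Linear(10, 3)   -- reuses self.output across calls
local fresh = nn.Parallel(1, 1)    -- allocates a new output each call
fresh:add(nn.Linear(10, 3))
fresh:add(nn.Linear(10, 3))

local x = torch.randn(10)
local px = torch.randn(2, 10)

reusing:forward(x)
local a = torch.pointer(reusing.output)
reusing:forward(x)
print(a == torch.pointer(reusing.output))  -- true: same tensor reused

fresh:forward(px)
local b = torch.pointer(fresh.output)
fresh:forward(px)
print(b == torch.pointer(fresh.output))    -- false: new tensor each pass
```

The second `print` is exactly the case that breaks the identity-keyed graph: the new `self.output` no longer matches any node recorded on the previous pass.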
I will try to figure out a way of making the graph generation more robust, without having to define special cases for every module.