Hello,
I am trying to get an embedding working, similar to the NNLM example provided. The architecture is as given below:

```lua
vs = 150
es = 150
cs = 3
hs = es*cs
-- define model
hu = nn.Sequential()
hu:extend(
   nn.LookupTable(vs,es),
   nn.Collapse(2)
)
hu:add(nn.Linear(hs,hs))
hu:add(nn.Tanh())
hu:add(nn.Linear(hs,vs))
hu:add(nn.LogSoftMax())
```

I am optimizing this in a naive manner, without using the optimizer class provided:

```lua
for i=1,100 do -- #x do
   p = hu:forward(x[i])
   t = y[i]
   err = criterion:forward(p,t)
   g = criterion:backward(p,t)
   hu:backward(x[i],g) -- NOT WORKING
   hu:updateGradParameters(nu)
   hu:updateParameters(lr)
   hu:zeroGradParameters()
end
```
When I do the backward pass, I get `[torch.DoubleTensor with no dimension]`, and hence I think I am not able to update the grad params with momentum.
Any suggestions?
@AshwinKalyanV Hi. What criterion are you using, what are the sizes of x and y, and what is the full stack trace of the error?