
momentum update for embeddings #164

Open · ashwinkalyan opened this issue Sep 29, 2015 · 1 comment

@ashwinkalyan
Hello,

I am trying to get an embedding model working, similar to the NNLM example provided. The architecture is given below:

require 'dpnn'  -- extends nn with Collapse, updateGradParameters, etc.

vs = 150      -- vocabulary size
es = 150      -- embedding size
cs = 3        -- context size (words per input)
hs = es*cs    -- hidden layer size

-- define model
hu = nn.Sequential()
hu:extend(
  nn.LookupTable(vs,es),  -- word indices -> cs x es embeddings
  nn.Collapse(2)          -- flatten the embeddings into one hs-vector
)
hu:add(nn.Linear(hs,hs))
hu:add(nn.Tanh())
hu:add(nn.Linear(hs,vs))
hu:add(nn.LogSoftMax())
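
For a single example, my understanding of the shapes is as follows (a sketch; x1 is just an illustrative input, and I am assuming Collapse(2) flattens the cs x es lookup output into one hs-vector):

x1 = torch.LongTensor{5, 12, 42}  -- cs=3 context word indices (arbitrary example values)
p = hu:forward(x1)
-- LookupTable(vs,es) : 3 -> 3x150
-- Collapse(2)        : 3x150 -> 450
-- Linear/Tanh/Linear : 450 -> 450 -> 150
-- LogSoftMax         : p is 150 log-probabilities, one per vocabulary word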

I am optimizing this naively, without using the provided optimizer class:
for i=1,100 do -- #x do
  p = hu:forward(x[i])
  t = y[i]
  err = criterion:forward(p,t)
  g = criterion:backward(p,t)
  hu:backward(x[i],g)
  -- NOT WORKING: hu:updateGradParameters(nu)
  hu:updateParameters(lr)
  hu:zeroGradParameters()
end

When I do the backward pass, I get

[torch.DoubleTensor with no dimension]

and I suspect this is why I cannot update the grad params with momentum.
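
For reference, the flattened gradients can be inspected right after the backward pass (a sketch using nn's standard getParameters, which returns all parameters and their gradients as two flat tensors):

params, gradParams = hu:getParameters()  -- flatten all params and grads into two 1D tensors
-- ... forward/backward as above ...
print(gradParams:dim(), gradParams:norm())  -- expect dim 1 and a non-zero norm after backward

Note that getParameters reallocates storage, so it is typically called once before the training loop.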

Any suggestions?

@nicholas-leonard (Owner)

@AshwinKalyanV Hi. What criterion are you using, what is the size of x and y, and what is the full stack trace of the error?
