Thank you very much for the ssm package! I have a question about the fitted weights returned by the ssm.HMM method when I applied it to a human reward-learning task. The task is a two-step task in which participants learn to choose between two options to collect rewards, and the GLM I'm using asks how the previous outcome affects the current choice, i.e. whether people stick with the rewarded option (outcome: 0 = no reward, 1 = reward). Participants generally show decent reward learning: their behavior is consistent with a positive effect of outcome on sticking with the rewarded option. However, the fitted weights returned by glmhmm.observations.params put a negative loading on "outcome" (Figure 1), which looks like the opposite of the actual behavior (the loading on "outcome_transition" also looks flipped relative to behavior). Yet when I use glmhmm.observations.calculate_logits to check the model's predicted behavior in each state, the results all make sense (Figure 2).

I just wanted to check with you whether something in my usage of the package could cause the apparent weight-sign flip in Figure 1. I'm attaching the code I used. Thank you!
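Roughly, the two checks described above look like this (a simplified sketch with placeholder variable names and assumed shapes, not the exact attached code):

```python
import numpy as np
from scipy.special import logsumexp

# Fitted per-state GLM weights, shape (num_states, num_categories - 1, input_dim).
W = glmhmm.observations.params
print(W)  # this is what Figure 1 plots

# Per-trial, per-state logits over the choice categories for one session
# (return shape assumed to be (T, num_states, num_categories)).
logits = glmhmm.observations.calculate_logits(inputs[0])

# Softmax over the category axis gives the per-state predicted choice
# probabilities that Figure 2 is based on.
probs = np.exp(logits - logsumexp(logits, axis=2, keepdims=True))
```

Since params holds only num_categories - 1 weight vectors per state, each vector is defined relative to a reference choice category; if that reference is the opposite category from the one I had in mind, a behaviorally positive outcome effect could print with a negative sign even though the predicted probabilities in Figure 2 come out right. That is my current guess, but I would appreciate confirmation.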
Figure 1: fitted per-state GLM weights from glmhmm.observations.params
Figure 2: per-state choice predictions from glmhmm.observations.calculate_logits
Code 1: Fit a GLM to obtain initial weights for the GLM-HMM
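(A minimal sketch of this step, with placeholder names and assumed dimensions rather than the exact attached code; a one-state GLM-HMM is used as the "GLM" here.)

```python
import ssm
import numpy as np

input_dim = 3       # e.g. outcome, outcome_transition, bias (assumed)
obs_dim = 1         # one choice per trial
num_categories = 2  # binary choice: 0 or 1

# A one-state GLM-HMM is just a GLM; fitting it gives initial observation weights.
glm = ssm.HMM(1, obs_dim, input_dim,
              observations="input_driven_obs",
              observation_kwargs=dict(C=num_categories),
              transitions="standard")

# `choices` is a list of (T, 1) int arrays, `inputs` a list of (T, input_dim) arrays.
glm.fit(choices, inputs=inputs, method="em", num_iters=200, tolerance=1e-4)

init_weights = glm.observations.params  # shape (1, num_categories - 1, input_dim)
```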
Code 2: Fit the GLM-HMM with the initial weights from the GLM in Code 1
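(Again a sketch under the same assumptions; the GLM weights are tiled across states with a little jitter before running EM.)

```python
num_states = 3  # assumed number of latent states

glmhmm = ssm.HMM(num_states, obs_dim, input_dim,
                 observations="input_driven_obs",
                 observation_kwargs=dict(C=num_categories),
                 transitions="standard")

# Initialize every state's observation weights from the one-state GLM fit,
# adding small noise so the states can differentiate during EM.
glmhmm.observations.params = (
    np.repeat(init_weights, num_states, axis=0)
    + 0.1 * np.random.randn(num_states, num_categories - 1, input_dim)
)

fit_ll = glmhmm.fit(choices, inputs=inputs, method="em",
                    num_iters=500, tolerance=1e-4)

fitted_weights = glmhmm.observations.params  # the weights shown in Figure 1
```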
Best regards,
Weilun
Hi @Gentu-Ding, did you get to the bottom of this? I'm curious because I also get negative weights for a feature that I'd expect to be positively associated with the outcome.