
Pretraining Getting Stuck #7

Open
iNeil77 opened this issue Oct 28, 2018 · 4 comments

Comments


iNeil77 commented Oct 28, 2018

I am running the pretraining code the way you suggested, but it has been stuck at this point for 2 hours now. Is it supposed to take this long?

neilpaul77@NeilRig77:~/Downloads/ntua-slp-semeval2018$ python sentiment2017.py 
/home/neilpaul77/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Running on:cuda
loading word embeddings...
Loaded word embeddings from cache.
Reading twitter_2018 - 1grams ...
Reading twitter_2018 - 2grams ...
Reading twitter_2018 - 1grams ...
Building word-level datasets...
Loading SEMEVAL_2017_word_train from cache!
Total words: 1435889, Total unks:9700 (0.68%)
Unique words: 45397, Unique unks:3602 (7.93%)
Labels statistics:
{'negative': '18.91%', 'neutral': '45.47%', 'positive': '35.62%'}

Loading SEMEVAL_2017_word_val from cache!
Total words: 75465, Total unks:521 (0.69%)
Unique words: 9191, Unique unks:198 (2.15%)
Labels statistics:
{'negative': '18.91%', 'neutral': '45.46%', 'positive': '35.63%'}

Initializing Embedding layer with pre-trained weights!
ModelWrapper(
  (feature_extractor): FeatureExtractor(
    (embedding): Embed(
      (embedding): Embedding(804871, 310)
      (dropout): Dropout(p=0.1)
      (noise): GaussianNoise (mean=0.0, stddev=0.2)
    )
    (encoder): RNNEncoder(
      (rnn): LSTM(310, 150, num_layers=2, batch_first=True, dropout=0.3, bidirectional=True)
      (drop_rnn): Dropout(p=0.3)
    )
    (attention): SelfAttention(
      (attention): Sequential(
        (0): Linear(in_features=300, out_features=300, bias=True)
        (1): Tanh()
        (2): Dropout(p=0.3)
        (3): Linear(in_features=300, out_features=1, bias=True)
        (4): Tanh()
        (5): Dropout(p=0.3)
      )
      (softmax): Softmax()
    )
  )
  (linear): Linear(in_features=300, out_features=3, bias=True)
)

@cbaziotis
Owner

Pretraining should take seconds to minutes, depending on your hardware. Please upgrade ekphrasis and try again:

pip install ekphrasis -U

@iNeil77
Author

iNeil77 commented Nov 2, 2018

I have reinstalled ekphrasis, but the issue persists.

@minkj1992

I have the same issue. Is it correct to set SEMEVAL_2017's embedding file to "ntua_twitter_affect_310"?

@agn-7
Contributor

agn-7 commented Jan 21, 2020

I had the same problem. The cause is an outdated version of ekphrasis. You can either update the library with pip install -U ekphrasis, or remove the pinned version of ekphrasis from requirements.txt so that installing the requirements pulls the latest release.
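For the second route, the version pin can be removed mechanically. A minimal sketch, assuming GNU sed and a pin of the (hypothetical) form `ekphrasis==x.y.z` in requirements.txt:

```shell
# Create a sample requirements.txt with a pinned ekphrasis line
# (the version number here is hypothetical, for illustration only).
printf 'ekphrasis==0.4.9\ntorch\n' > requirements.txt

# Rewrite the pinned line to an unpinned one, so the next install
# resolves the latest ekphrasis release. Other lines are untouched.
sed -i 's/^ekphrasis==.*/ekphrasis/' requirements.txt

cat requirements.txt
# Then reinstall the project's dependencies:
#   pip install -r requirements.txt
```

This only edits the local requirements file; you still need to rerun the pip install step for the change to take effect in your environment.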
