lstm notes
LSTM input
NumPy's reshape() is used to convert a 1D array into a 3D array, because an LSTM layer expects 3-dimensional input rather than 1D.
The 3 dimensions are:
Samples. One sequence is one sample. A batch is comprised of one or more samples.
Time Steps. One time step is one point of observation in the sample.
Features. One feature is one observation at a time step.
https://machinelearningmastery.com/reshape-input-data-long-short-term-memory-networks-keras/
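A minimal NumPy sketch of this reshape (the array values and sizes here are illustrative, not from the notes):

```python
import numpy as np

# A 1D sequence of 10 observations
data = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

# Reshape to [samples, time steps, features]:
# 1 sample, 10 time steps, 1 feature per time step
data = data.reshape((1, 10, 1))
print(data.shape)  # (1, 10, 1)
```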
Keras python LSTM (sequential modeling)
https://www.liip.ch/en/blog/sentiment-detection-with-keras-word-embeddings-and-lstm-deep-learning-networks
https://www.youtube.com/watch?v=8h8Z_pKyifM&list=PL1w8k37X_6L9s6pcqz4rAIEYZtF6zKjUE&index=5
Keras Embedding layer - vector representation of words
It creates embeddings on the fly; it takes 3 arguments: vocab size, embedding dimension, max length
https://github.com/yashugupta786/Keras_layers_detail/blob/master/keras_embeddings_layer.ipynb
https://github.com/yashugupta786/keras_sentiment
https://github.com/yashugupta786/Keras_lstm_text_classification
We get embeddings from the Keras Embedding layer on the fly.
It generates embeddings for all the documents, each with shape (max_length × embedding_dimension).
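A hedged NumPy sketch of what the Embedding layer does conceptually: a trainable lookup table of shape (vocab_size, embedding_dim) indexed by token ids. The sizes and token ids below are made up for illustration:

```python
import numpy as np

vocab_size = 50      # number of distinct tokens (illustrative)
embedding_dim = 8    # embedding dimension (illustrative)
max_length = 4       # padded document length (illustrative)

# The layer's weights: one embedding vector per vocabulary index
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, embedding_dim))

# One padded document of token ids (hypothetical ids)
doc = np.array([3, 17, 42, 0])

# Lookup: each id selects a row, so one document
# becomes an array of shape (max_length, embedding_dim)
embedded = embedding_matrix[doc]
print(embedded.shape)  # (4, 8)
```

In Keras these weights are learned during training rather than fixed random values; the lookup itself works the same way.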
Input and output shape of a Keras LSTM
input_shape takes 3 parameters:
batch_size - optional
timesteps - e.g. the number of words
features - the number of features per time step
model.add(LSTM(64))
If the input shape is (10, 4, 8), that means:
10 records
4 is the max length (timesteps)
8 is the embedding size (features)
After applying the LSTM, the output shape is 10 × 64.
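To confirm the shape arithmetic above, here is a minimal hand-rolled LSTM forward pass in NumPy with random, untrained weights (in practice Keras's LSTM layer does this for you; this sketch only demonstrates how (10, 4, 8) becomes (10, 64)):

```python
import numpy as np

def lstm_forward(x, units):
    """Run one LSTM layer forward with random weights.

    x: input of shape (samples, timesteps, features).
    Returns the last hidden state, shape (samples, units).
    """
    samples, timesteps, features = x.shape
    rng = np.random.default_rng(0)
    # One weight matrix per gate: input, forget, cell candidate, output
    W = rng.normal(size=(4, features, units)) * 0.1
    U = rng.normal(size=(4, units, units)) * 0.1
    b = np.zeros((4, units))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    h = np.zeros((samples, units))  # hidden state
    c = np.zeros((samples, units))  # cell state
    for t in range(timesteps):
        xt = x[:, t, :]
        i = sigmoid(xt @ W[0] + h @ U[0] + b[0])  # input gate
        f = sigmoid(xt @ W[1] + h @ U[1] + b[1])  # forget gate
        g = np.tanh(xt @ W[2] + h @ U[2] + b[2])  # cell candidate
        o = sigmoid(xt @ W[3] + h @ U[3] + b[3])  # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Input shape (10, 4, 8): 10 records, 4 timesteps, 8 features
x = np.random.default_rng(1).normal(size=(10, 4, 8))
out = lstm_forward(x, units=64)
print(out.shape)  # (10, 64)
```

The last hidden state has one 64-dimensional vector per record, which matches the 10 × 64 output noted above.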
https://www.youtube.com/watch?v=CcGf_Uo7NMw&list=PL1w8k37X_6L9s6pcqz4rAIEYZtF6zKjUE&index=6
kaggle.com/nafisur/keras-lstm