Patrick-devX/TensorFlowDev
######### Regression #########

********* Improving the Model *********
1. Change the optimizer: SGD, Adam
2. Change the learning rate
3. Try more layers and more neurons
4. Try a different batch size
5. Look for more data

# Deep Learning Convolutional Network

**** Kernel ****
In image processing, a kernel, convolution matrix, or mask is a small matrix used for blurring, sharpening, embossing, edge detection and more. This is accomplished by computing a convolution between the kernel and the image: g(x, y) = kernel * f(x, y)

**** Max Pooling ****
The MaxPooling2D layer is designed to compress the image while maintaining the content of the features that were highlighted by the convolution. Specifying (2, 2) for the MaxPooling quarters the size of the image: it creates a 2x2 array of pixels and picks the biggest one, turning 4 pixels into 1. Repeating this across the image halves both the number of horizontal and vertical pixels, effectively reducing the image to 25% of its original size.

**** Binary Classification Problem ****
LOSS FUNCTION: Since this is a binary classification problem, the model is trained with the binary_crossentropy loss function.
OPTIMIZER: Using RMSprop is preferable to stochastic gradient descent (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as Adam and Adagrad, also automatically adapt the learning rate during training and would work equally well here.)

############# Course 2: CNN upgraded Lab #############
C2 Lab1: First review how to build CNNs, prepare your data with ImageDataGenerator and examine the results.
1. Explore the example data of Dogs vs. Cats
2. Build and train a neural network to classify between the two pets
3. Evaluate the training and validation accuracy
(A minimal end-to-end sketch of such a binary-classification CNN appears further below, after the tokenizer notes.)

############# Course 2: CNN Data Augmentation on Horses or Humans Dataset #############
Applying data augmentation requires a good understanding of the dataset.
***** ImageDataGenerator *****

############# Course 2 W3: Transfer Learning #############
Use a pre-trained model to achieve good results even with a small training dataset:
1. Take just the convolutional layers of one model
2. Attach some dense layers onto it
3. Train just the dense network
4. Evaluate the results

*** Set up the pretrained model ***
1. Set the input shape to fit your application.
2. Pick and freeze the convolutional layers to take advantage of the features they have already learned.
3. Add dense layers which you will train.
(A minimal sketch of this setup also appears further below.)

############# Course 2: MultiClass Classifier #############

############# Course 2 Assignment: MultiClass Classifier, Sign Language MNIST #############
The Sign Language MNIST dataset contains 28x28 images of hands depicting the 26 letters of the English alphabet. We will pre-process the data so that it can be fed into a CNN to correctly classify each image as the letter it represents.
*** Think about the data generator batch_size when training ***

################ Course 2: Basic Tokenizer C2-W2-Lab2 ################
Training a binary classification model with the IMDB dataset: in this lab we will build a sentiment model to distinguish between positive and negative movie reviews.
***** Visualize the trained weights in the Embedding layer to see words that are clustered together. *****
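Tying together the CNN, max-pooling and binary-classification notes above, here is a minimal sketch of a Dogs-vs-Cats style binary classifier, as referenced there. The 150x150 input size, the layer sizes and the directory path are illustrative assumptions, not values taken from these notes.

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Convolution kernels extract features; each MaxPooling2D(2, 2) quarters the image size.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(150, 150, 3)),                   # assumed 150x150 RGB input
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),        # single sigmoid unit for binary classification
])

# binary_crossentropy loss with the RMSprop optimizer, as discussed above.
model.compile(loss='binary_crossentropy',
              optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              metrics=['accuracy'])

# ImageDataGenerator rescales (and can augment) the images fed to the model.
train_datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=40, horizontal_flip=True)

# 'cats_and_dogs/train' is a hypothetical directory layout:
# train_generator = train_datagen.flow_from_directory('cats_and_dogs/train', target_size=(150, 150),
#                                                     batch_size=20, class_mode='binary')
# model.fit(train_generator, epochs=15)
```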
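And for the transfer-learning notes above, a minimal sketch of freezing a pre-trained convolutional base and training only new dense layers on top. Using InceptionV3 from tf.keras.applications, the 150x150 input shape and the dense-layer sizes are assumptions for illustration.

```python
import tensorflow as tf

# 1. Load a pre-trained convolutional base (InceptionV3 here is an assumption) without its top classifier.
base_model = tf.keras.applications.InceptionV3(input_shape=(150, 150, 3),
                                               include_top=False,
                                               weights='imagenet')

# 2. Freeze the convolutional layers so the features they have already learned are kept as-is.
for layer in base_model.layers:
    layer.trainable = False

# 3. Attach new dense layers on top; only these will be trained.
x = tf.keras.layers.Flatten()(base_model.output)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
output = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model(inputs=base_model.input, outputs=output)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# 4. Train on the new (small) dataset and evaluate, e.g. model.fit(train_generator, epochs=20)
```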
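Finally, a minimal sketch of the binary sentiment classifier described in the tokenizer notes: an Embedding layer (whose trained weights can later be visualized), averaged over the sequence dimension and followed by dense layers. The vocabulary size, embedding dimension and sequence length are assumed values, not taken from these notes.

```python
import tensorflow as tf

vocab_size = 10000    # assumed vocabulary size
embedding_dim = 16    # assumed embedding dimension
max_length = 120      # assumed (padded) sequence length

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_length,)),
    tf.keras.layers.Embedding(vocab_size, embedding_dim, name='review_embedding'),
    tf.keras.layers.GlobalAveragePooling1D(),          # average over the sequence dimension
    tf.keras.layers.Dense(24, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),    # positive vs. negative review
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# After training, the embedding weights (shape: vocab_size x embedding_dim) can be exported
# to see which words cluster together, e.g. with the TensorFlow Embedding Projector.
embedding_weights = model.get_layer('review_embedding').get_weights()[0]
```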
****** GlobalAveragePooling ******
This layer averages over the sequence (time-step) dimension before connecting to the dense layers. E.g.:

    import numpy as np
    import tensorflow as tf

    sample_array = np.array([[[10, 2], [1, 3], [1, 1]]])               # shape (1, 3, 2)
    output = tf.keras.layers.GlobalAveragePooling1D()(sample_array)
    # output = [[4 2]]  --->  (10+1+1)/3 and (2+3+1)/3

********* C3 W3 Lab1: imdb subwords, SINGLE LSTM LAYER *********
Subword Tokenization with the IMDB Reviews Dataset: in this lab we will look at the pre-tokenized dataset that uses subword text encoding. This is an alternative to the word-based tokenization we have been using in the previous labs. Let's see its impact on the model.
In this lab we saw how subword text encoding can be a robust technique to avoid OOV (out-of-vocabulary) tokens. It can decode uncommon words it has not seen before even with a relatively small vocab size. Consequently, it results in longer token sequences compared to full-word tokenization.

********* C3 W3 Lab2: imdb subwords, Multiple LSTM Layers *********
An LSTM layer expects a sequence input, so if the previous layer is also an LSTM, it should output a sequence as well (return_sequences=True). Its output then has 3 dimensions: (batch_size, timesteps, features). (A minimal sketch of stacked LSTM layers appears at the end of these notes.)

********* C3 W3 Assignment: Exploring Overfitting in NLP *********
The goal is to put this knowledge into practice by creating a model architecture that does not overfit. We will be using a variation of the Sentiment140 dataset, which contains 1.6 million tweets alongside their respective sentiment (0 for negative and 4 for positive).

*********** C3 W4 Lab1: Generating Text with Neural Networks ***********
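As referenced in the Multiple LSTM Layers notes above, here is a minimal sketch of stacking LSTM layers: every LSTM except the last returns the full sequence (return_sequences=True), so the next LSTM still receives a 3-dimensional (batch_size, timesteps, features) input. The vocabulary size, embedding dimension and layer sizes are assumed values.

```python
import tensorflow as tf

vocab_size = 8000     # assumed subword vocabulary size
embedding_dim = 64    # assumed embedding dimension

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,)),                      # variable-length token sequences
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    # This LSTM feeds another LSTM, so it must output the whole sequence:
    # shape (batch_size, timesteps, features).
    tf.keras.layers.LSTM(64, return_sequences=True),
    # The last LSTM only needs its final output: shape (batch_size, features).
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
```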