Task 1: Implement a simple feedforward neural network using the NumPy library. Start by creating a class that defines the structure of the network, including the number of input, hidden, and output neurons. Then implement the forward pass, which involves computing the dot product of the inputs with the layer's weights, adding a bias, passing the result through an activation function, and repeating the process for each subsequent layer.
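A minimal sketch of what such a class might look like, assuming one hidden layer and sigmoid activations; the class name FeedforwardNetwork and the layer sizes in the usage example are illustrative:

```python
import numpy as np

class FeedforwardNetwork:
    """Minimal feedforward network with a single hidden layer."""
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights; biases start at zero.
        self.W1 = rng.normal(0, 0.1, size=(n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, size=(n_hidden, n_outputs))
        self.b2 = np.zeros(n_outputs)

    @staticmethod
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x):
        # Dot product with the weights, plus bias, then activation -- once per layer.
        self.h = self.sigmoid(x @ self.W1 + self.b1)
        self.y = self.sigmoid(self.h @ self.W2 + self.b2)
        return self.y

net = FeedforwardNetwork(n_inputs=4, n_hidden=8, n_outputs=3)
print(net.forward(np.random.rand(4)))  # three output activations
```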
Task 2: Implement the backpropagation algorithm using the same neural network you built in task 1. Start by calculating the error at the output layer, then use the chain rule to propagate it backward and obtain the gradients for each layer. Finally, update the weights using gradient descent.
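One way a single training step might look, assuming the same one-hidden-layer sigmoid network as above and a mean-squared-error loss; the function name backprop_step and the learning rate are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, b1, W2, b2, lr=0.1):
    # Forward pass (same structure as task 1).
    h = sigmoid(x @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Error at the output layer for a mean-squared-error loss.
    delta2 = (y - target) * y * (1 - y)      # dL/dz2 via the chain rule
    # Propagate the error back through W2 to the hidden layer.
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # dL/dz1

    # Gradient-descent updates (arrays are modified in place).
    W2 -= lr * np.outer(h, delta2)
    b2 -= lr * delta2
    W1 -= lr * np.outer(x, delta1)
    b1 -= lr * delta1
    return 0.5 * np.sum((y - target) ** 2)   # loss, for monitoring

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.1, (8, 3)), np.zeros(3)
x, t = rng.random(4), np.array([1.0, 0.0, 0.0])
for _ in range(200):
    loss = backprop_step(x, t, W1, b1, W2, b2)
print(loss)  # should approach zero as the network memorizes the example
```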
Task 3: Implement different variants of gradient descent (such as batch, mini-batch, and stochastic gradient descent) and compare their performance on a classification task.
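A sketch of how the three variants can share one training loop, using logistic regression on a toy problem so the comparison stays self-contained; the batch size alone selects the variant, and all names and hyperparameters are illustrative:

```python
import numpy as np

def train(X, y, batch_size, lr=0.1, epochs=50, seed=0):
    """batch_size = len(X) -> batch GD, 1 -> stochastic GD, in between -> mini-batch."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        order = rng.permutation(len(X))           # reshuffle every epoch
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            p = 1.0 / (1.0 + np.exp(-(Xb @ w + b)))   # sigmoid predictions
            grad_w = Xb.T @ (p - yb) / len(idx)       # average gradient over the batch
            grad_b = np.mean(p - yb)
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b

# Toy linearly separable binary classification problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
for bs in (len(X), 32, 1):   # batch, mini-batch, stochastic
    w, b = train(X, y, batch_size=bs)
    acc = np.mean(((X @ w + b) > 0) == y)
    print(f"batch_size={bs:3d}  accuracy={acc:.3f}")
```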
Task 4: Build a simple image classifier using a feedforward neural network and the MNIST dataset. Start by loading the dataset and preprocessing the images. Then implement the feedforward neural network, train it using backpropagation and gradient descent, and evaluate its performance.
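A compact sketch, assuming scikit-learn is available purely to download MNIST via fetch_openml; the hidden size, learning rate, and epoch count are illustrative choices, and this version uses a ReLU hidden layer with a softmax/cross-entropy output:

```python
import numpy as np
from sklearn.datasets import fetch_openml   # assumed available, used only to fetch MNIST

# Load and preprocess: scale pixels to [0, 1], one-hot encode the labels.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X.astype(np.float32) / 255.0
labels = y.astype(int)
Y = np.eye(10)[labels]
X_train, X_test = X[:60000], X[60000:]
Y_train, y_test = Y[:60000], labels[60000:]

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (784, 128)), np.zeros(128)
W2, b2 = rng.normal(0, 0.01, (128, 10)), np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr, batch = 0.1, 64
for epoch in range(3):
    order = rng.permutation(60000)
    for s in range(0, 60000, batch):
        idx = order[s:s + batch]
        x, t = X_train[idx], Y_train[idx]
        h = np.maximum(0, x @ W1 + b1)       # ReLU hidden layer
        p = softmax(h @ W2 + b2)
        d2 = (p - t) / len(idx)              # softmax + cross-entropy gradient
        d1 = (d2 @ W2.T) * (h > 0)           # backprop through the ReLU
        W2 -= lr * (h.T @ d2)
        b2 -= lr * d2.sum(axis=0)
        W1 -= lr * (x.T @ d1)
        b1 -= lr * d1.sum(axis=0)
    h = np.maximum(0, X_test @ W1 + b1)
    pred = (h @ W2 + b2).argmax(axis=1)
    print(f"epoch {epoch}: test accuracy {np.mean(pred == y_test):.3f}")
```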
Task 5: Build a simple natural language processing (NLP) model using a feedforward neural network and a dataset such as IMDB or Reuters. Start by loading the dataset and preprocessing the text. Then implement the feedforward neural network, train it using backpropagation and gradient descent, and evaluate its performance.
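A sketch along the same lines, assuming the Keras IMDB loader (tensorflow.keras.datasets.imdb) is available purely for data access; reviews are multi-hot encoded as bags of words, the data is subsampled to keep memory modest, and all sizes and rates are illustrative:

```python
import numpy as np
from tensorflow.keras.datasets import imdb   # assumed available, used only to fetch the data

num_words = 10000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_words)
# Subsample so the multi-hot matrices stay small enough for memory.
x_train, y_train = x_train[:5000], y_train[:5000].astype(np.float32)
x_test, y_test = x_test[:5000], y_test[:5000]

def multi_hot(seqs, dim):
    # Bag-of-words encoding: 1 wherever a word index occurs in the review.
    out = np.zeros((len(seqs), dim), dtype=np.float32)
    for i, seq in enumerate(seqs):
        out[i, seq] = 1.0
    return out

X_train, X_test = multi_hot(x_train, num_words), multi_hot(x_test, num_words)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (num_words, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.01, (16, 1)), np.zeros(1)

lr, batch = 0.1, 128
for epoch in range(5):
    order = rng.permutation(len(X_train))
    for s in range(0, len(X_train), batch):
        idx = order[s:s + batch]
        x, t = X_train[idx], y_train[idx][:, None]
        h = np.maximum(0, x @ W1 + b1)             # ReLU hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output for binary sentiment
        d2 = (p - t) / len(idx)                    # sigmoid + cross-entropy gradient
        d1 = (d2 @ W2.T) * (h > 0)
        W2 -= lr * (h.T @ d2)
        b2 -= lr * d2.sum(axis=0)
        W1 -= lr * (x.T @ d1)
        b1 -= lr * d1.sum(axis=0)
    pred = (np.maximum(0, X_test @ W1 + b1) @ W2 + b2).ravel() > 0
    print(f"epoch {epoch}: test accuracy {np.mean(pred == y_test):.3f}")
```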
Task 6: Implement a simple convolutional neural network (CNN) using the NumPy library. Start by creating a class that defines the structure of the network, including the number of filters, the kernel size, and the number of output neurons. Then implement the forward pass, which involves sliding each convolutional filter over the input to produce feature maps, passing the results through an activation function, downsampling with a pooling layer, and feeding the flattened features into a fully connected output layer.
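A minimal sketch of the forward pass only, assuming 28x28 grayscale inputs and a conv -> ReLU -> max-pool -> fully-connected pipeline; the class name SimpleCNN, filter count, and kernel size are illustrative:

```python
import numpy as np

def conv2d(image, kernels, stride=1):
    """Valid cross-correlation of a 2-D image with a bank of square kernels."""
    k = kernels.shape[1]
    out_h = (image.shape[0] - k) // stride + 1
    out_w = (image.shape[1] - k) // stride + 1
    out = np.zeros((kernels.shape[0], out_h, out_w))
    for f, kern in enumerate(kernels):           # one feature map per kernel
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i*stride:i*stride+k, j*stride:j*stride+k]
                out[f, i, j] = np.sum(patch * kern)
    return out

def max_pool(maps, size=2):
    c, h, w = maps.shape
    out = maps[:, :h - h % size, :w - w % size]  # trim so dimensions divide evenly
    out = out.reshape(c, h // size, size, w // size, size)
    return out.max(axis=(2, 4))

class SimpleCNN:
    """Conv -> ReLU -> max-pool -> flatten -> fully connected -> softmax."""
    def __init__(self, n_filters=4, kernel=3, n_classes=10, seed=0):
        rng = np.random.default_rng(seed)
        self.kernels = rng.normal(0, 0.1, (n_filters, kernel, kernel))
        flat = n_filters * ((28 - kernel + 1) // 2) ** 2   # assumes 28x28 inputs
        self.W = rng.normal(0, 0.1, (flat, n_classes))
        self.b = np.zeros(n_classes)

    def forward(self, image):
        x = np.maximum(0, conv2d(image, self.kernels))   # feature maps + ReLU
        x = max_pool(x).ravel()                          # downsample, then flatten
        z = x @ self.W + self.b
        e = np.exp(z - z.max())
        return e / e.sum()                               # class probabilities

cnn = SimpleCNN()
print(cnn.forward(np.random.rand(28, 28)))   # ten class probabilities
```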
Task 7: Implement a simple recurrent neural network (RNN) using the NumPy library. Start by creating a class that defines the structure of the network, including the number of input, hidden, and output neurons. Then implement the forward pass, which processes the input sequence one time step at a time: at each step, combine the current input with the previous hidden state, pass the result through an activation function, and compute an output from the new hidden state.
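A minimal sketch of a vanilla RNN forward pass; the class name SimpleRNN and all dimensions are illustrative:

```python
import numpy as np

class SimpleRNN:
    """Vanilla RNN: the hidden state carries information across time steps."""
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.Wxh = rng.normal(0, 0.1, (n_inputs, n_hidden))    # input -> hidden
        self.Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))    # hidden -> hidden
        self.Why = rng.normal(0, 0.1, (n_hidden, n_outputs))   # hidden -> output
        self.bh = np.zeros(n_hidden)
        self.by = np.zeros(n_outputs)

    def forward(self, xs):
        """xs: a sequence of input vectors, shape (T, n_inputs)."""
        h = np.zeros(self.Whh.shape[0])
        outputs = []
        for x in xs:
            # Combine the current input with the previous hidden state.
            h = np.tanh(x @ self.Wxh + h @ self.Whh + self.bh)
            outputs.append(h @ self.Why + self.by)
        return np.array(outputs), h

rnn = SimpleRNN(n_inputs=5, n_hidden=16, n_outputs=2)
ys, h_final = rnn.forward(np.random.rand(10, 5))   # a 10-step sequence
print(ys.shape)   # (10, 2): one output per time step
```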