Responses

A. In Moroney’s second lecture video he expands on the Fashion MNIST dataset as well as the importance of training and testing datasets. When you have a group of data (images), it is split into training and testing sets. We begin by taking a subset of the images and using them to train the neural network we have created (the training set). The remaining images (the testing set) are used to see how well the neural network performs on images it has never seen. In sum, the training set is used to fit the model, while the rest of the data is held back to measure how well the model can predict on new examples.
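The train/test split described above can be sketched in plain numpy. The 80/20 ratio, the random fake images, and the variable names here are illustrative stand-ins, not taken from the lecture:

```python
import numpy as np

# 100 fake 28x28 "images" with integer labels 0-9 (stand-ins for Fashion MNIST)
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 28, 28))
labels = rng.integers(0, 10, size=100)

# hold out the last 20% as the test set; train on the first 80%
split = int(0.8 * len(images))
train_images, test_images = images[:split], images[split:]
train_labels, test_labels = labels[:split], labels[split:]

print(train_images.shape)  # (80, 28, 28) -- used to train the model
print(test_images.shape)   # (20, 28, 28) -- used to evaluate it on new data
```

In Keras itself, `tf.keras.datasets.fashion_mnist.load_data()` already returns the data pre-split into these two groups.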

B. There were 3 layers in the neural network used for the Fashion MNIST example: Flatten, Dense (128 neurons), and Dense (10 neurons). The Flatten layer takes a rectangular block of data (28, 28) and flattens it into a one-dimensional array of 784 values. The last Dense layer is made up of 10 neurons because there were 10 classes (articles of clothing) in the example. The relu activation on the 128-neuron layer sets every value that is less than 0 to 0, so only positive signals are passed forward. The softmax activation on the final layer turns the 10 outputs into probabilities, and the class with the highest probability is the network’s prediction.
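The three operations above can be sketched with numpy on toy numbers (the score values are made up; this mimics what the layers compute, not the trained network’s actual weights):

```python
import numpy as np

# Flatten: a 28x28 image becomes a one-dimensional array of 784 values
image = np.zeros((28, 28))
flat = image.reshape(-1)
print(flat.shape)  # (784,)

# ReLU: every value below 0 is set to 0; positive values pass through
z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
relu = np.maximum(z, 0)
print(relu)  # negatives become 0, leaving 1.5 and 3.0

# Softmax: turns the final layer's 10 scores into probabilities that
# sum to 1; the predicted class is the one with the highest probability
scores = np.array([1.0, 2.0, 0.5, 3.0, 0.1, 0.2, 0.3, 0.4, 0.6, 0.7])
probs = np.exp(scores) / np.exp(scores).sum()
print(probs.argmax())  # index 3 has the largest score, so it wins
```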

C. The loss and optimizer functions are very important in producing accurate answers. Training begins with a guess made by the neural network, which is sent to the loss function; the loss states how good or bad the guess was. That measurement goes to the optimizer, which adjusts the model’s parameters to produce a better guess. The new guess then goes back to the loss function, and the loop continues: guess, loss, optimizer, and so on. Each time the loop runs, the optimizer’s adjustment usually lowers the loss, making each guess more accurate than the one before.
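The guess → loss → optimizer loop can be sketched as plain gradient descent on a one-parameter toy problem (the target value, learning rate, and step count are invented for illustration; Keras does this over all the network’s weights):

```python
# goal: learn w so that w * 2 is close to 10 (the ideal w is 5)
w = 0.0    # the network's first "guess" at the parameter
lr = 0.05  # the optimizer's step size

for step in range(100):
    pred = w * 2                 # make a guess with the current parameter
    loss = (pred - 10) ** 2      # loss: how good or bad the guess was
    grad = 2 * (pred - 10) * 2   # gradient: which way to adjust w
    w = w - lr * grad            # optimizer: produce a better guess

print(round(w, 2))  # converges near 5.0, and the loss near 0
```

Each pass through the loop is one turn of the cycle described above: the loss measures the error, and the optimizer uses it to make the next guess better.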