Week 5: Refining The Models
April 7, 2023
Hi everyone, and welcome back to my blog! This week I continued to work on the neural network model and refined it.
Before I get into what I did this week, I would like to clarify that I used an 80/20 train/test split when training my model.
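As a rough sketch, an 80/20 split like this is commonly done with scikit-learn's `train_test_split`; the feature matrix and target below are placeholders, not the actual survey data:

```python
# Hypothetical sketch of an 80/20 train/test split with scikit-learn.
# X and y are placeholder data standing in for the real features/target.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)  # placeholder feature matrix (50 rows)
y = np.array([0, 1] * 25)          # placeholder binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
print(len(X_train), len(X_test))  # 40 10
```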
Now moving on to what I did this week:
Fine-Tuning Model Architecture:
To find the architecture with the highest accuracy, I defined a helper function that builds the model so I could call it inside a loop.
Next, I listed the architecture parameters to test. I then wrote an outer loop over different numbers of hidden layers, with an inner loop over the number of neurons per layer. Inside the loops, I constructed the model with the helper function, fit it on the training set, and evaluated it on the validation set.
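The grid search described above can be sketched roughly as follows. This is a minimal illustration using scikit-learn's `MLPClassifier` as a stand-in for the actual model, and the parameter grid and toy dataset are assumptions, not the real ones:

```python
# Illustrative sketch of the layer/neuron grid search (assumed grid and data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

def build_model(n_layers, n_neurons):
    """Helper that builds a model for a given architecture."""
    return MLPClassifier(
        hidden_layer_sizes=(n_neurons,) * n_layers,
        max_iter=300,
        random_state=0,
    )

best_acc, best_arch = 0.0, None
for n_layers in [1, 3, 6]:           # outer loop: number of hidden layers
    for n_neurons in [10, 50, 110]:  # inner loop: neurons per layer
        model = build_model(n_layers, n_neurons)
        model.fit(X_train, y_train)              # fit on the training set
        acc = model.score(X_val, y_val)          # evaluate on validation set
        if acc > best_acc:
            best_acc, best_arch = acc, (n_layers, n_neurons)

print("best architecture:", best_arch, "accuracy:", round(best_acc, 2))
```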
The highest accuracy I got was 72%, with 15 hidden layers and 110 neurons per layer. However, the loss was also very high, so I experimented with the number of layers and neurons to maximize accuracy while minimizing loss.
After changing the number of layers and neurons, this is the model that gave me the highest accuracy with the lowest loss.
Here is the highest accuracy and lowest loss of this model:
Neural Network Visualization:
To see whether this model was in fact better than the one I created earlier (in my last blog), I decided to create a visualization for both models. Here is the visualization for the first model, which had one hidden layer:
Here is the visualization for the second model, which has 6 hidden layers:
As you can see, the gap between the training and validation loss is smaller for the second model. Because its two loss curves diverge less, the second model is not overfitting to the training set as much as the first.
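A visualization like the ones above can be produced by plotting training loss against validation loss per epoch; a widening gap between the curves signals overfitting. The loss values below are made-up placeholders, not the model's real history:

```python
# Sketch of a training-vs-validation loss plot (placeholder loss values).
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

history = {
    "loss":     [0.90, 0.70, 0.60, 0.55, 0.52, 0.50],  # training loss
    "val_loss": [0.95, 0.75, 0.65, 0.62, 0.60, 0.59],  # validation loss
}

plt.plot(history["loss"], label="training loss")
plt.plot(history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curves.png")
print("saved loss_curves.png")
```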
Thus the final accuracy for my neural network model is 73%.
Next week, I will try a new type of regression called Poisson regression, which is similar to logistic regression but predicts counts instead of just a yes/no outcome. Additionally, I would like to run the model on different subsets of the data to see whether that changes its accuracy.
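To preview the idea: a Poisson regression can be sketched with scikit-learn's `PoissonRegressor`, which models a non-negative count outcome. The toy data here is an illustrative assumption, not my dataset:

```python
# Minimal Poisson regression sketch on synthetic count data (assumed data).
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(200, 1))              # one synthetic feature
y = rng.poisson(lam=np.exp(1.0 + 0.5 * X[:, 0]))  # counts, not yes/no

model = PoissonRegressor(alpha=0.0)
model.fit(X, y)
pred = model.predict([[1.0]])  # predicted mean count for a new input
print("predicted count:", pred[0])
```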
Thank you for reading, and see you next week!
Sources:
- Open Sourcing Mental Illness, LTD. "OSMI Mental Health in Tech Survey 2016." Kaggle, 2016, www.kaggle.com/datasets/osmi/mental-health-in-tech-2016.