Week 6: Evaluating Performance and Debugging
April 21, 2025
This week marked a major turning point in my project: transitioning from preparing data and building the initial model to testing how well it performs under real market conditions. I spent the first part of the week finalizing my neural network's architecture, refining how the model processed volatility, technical indicators, and historical price patterns. With the guidance of my external advisor, I experimented with different numbers of hidden layers and activation functions to strike a balance between overfitting and underfitting.
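To give a sense of what that experimentation looked like, here is a minimal sketch of sweeping depth and activation functions in Keras. The names and values here (build_model, N_FEATURES, the grid of depths and activations) are illustrative placeholders rather than my exact configuration.

```python
# Sketch of the architecture experiments described above. Assumes
# TensorFlow/Keras; all names and hyperparameter values are placeholders.
import tensorflow as tf

N_FEATURES = 12  # e.g., volatility, technical indicators, lagged returns

def build_model(n_hidden_layers: int, units: int, activation: str) -> tf.keras.Model:
    """Small feed-forward net; vary depth and activation to probe
    the overfitting/underfitting trade-off."""
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(N_FEATURES,))])
    for _ in range(n_hidden_layers):
        model.add(tf.keras.layers.Dense(units, activation=activation))
    # Single sigmoid output: probability the next move is up.
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# A simple grid over the knobs mentioned above.
for depth in (1, 2, 3):
    for act in ("relu", "tanh"):
        model = build_model(depth, units=32, activation=act)
```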
After getting the model to run without errors, I began evaluating its predictions. I compared its performance against baseline strategies such as simply buying and holding the S&P 500, and also tracked how often the model made correct directional calls. While some results were promising, others revealed inconsistencies, especially during volatile market periods. This forced me to go back, tweak some hyperparameters, and pay closer attention to how I split and normalized my training and testing data.
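For reference, the kind of comparison I mean looks roughly like the sketch below: directional accuracy, plus the cumulative return of a simple long/flat strategy measured against buy-and-hold. The numbers are toy values for illustration, not real results from my model.

```python
# Hedged sketch of the evaluation described above; the return arrays
# are hypothetical stand-ins for the project's real data.
import numpy as np

def directional_accuracy(predicted_returns: np.ndarray,
                         actual_returns: np.ndarray) -> float:
    """Fraction of periods where the predicted direction matched reality."""
    return float(np.mean(np.sign(predicted_returns) == np.sign(actual_returns)))

def cumulative_return(daily_returns: np.ndarray) -> float:
    """Total return from compounding a series of daily returns."""
    return float(np.prod(1.0 + daily_returns) - 1.0)

# Toy data: buy-and-hold baseline vs. a long/flat strategy that only
# holds the index on days the model predicts an up move.
actual = np.array([0.004, -0.012, 0.007, 0.002, -0.005])
predicted = np.array([0.003, -0.008, -0.001, 0.004, -0.002])

hold = cumulative_return(actual)
strategy = cumulative_return(np.where(predicted > 0, actual, 0.0))
print(f"directional accuracy: {directional_accuracy(predicted, actual):.2%}")
print(f"buy-and-hold: {hold:.2%}  model strategy: {strategy:.2%}")
```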
One unexpected challenge came from the way my model handled recent data: at first, it performed worse on newer timeframes. After some debugging and research, I realized that training on long histories while weighting every period equally was letting older trends dominate and dragging down predictions on recent data. I'm now working on giving more weight to recent market behavior while still keeping long-term trends in context.
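One common way to do this, sketched below, is exponentially decaying sample weights so that older observations count less during training. The half-life value here is an assumption I haven't tuned yet, not a project parameter.

```python
# Recency weighting via exponential decay. The half-life (252 trading
# days, i.e., roughly one year) is an illustrative assumption.
import numpy as np

def recency_weights(n_samples: int, half_life: int = 252) -> np.ndarray:
    """Weight each training sample (ordered oldest to newest) so that an
    observation half_life steps older counts half as much; the newest
    sample gets weight 1.0."""
    ages = np.arange(n_samples - 1, -1, -1)  # newest sample has age 0
    return 0.5 ** (ages / half_life)

weights = recency_weights(1000)
# With Keras, these plug straight into training:
# model.fit(X_train, y_train, sample_weight=weights, epochs=20)
```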
Next week, I’ll work on improving prediction accuracy and begin analyzing how this model stacks up against expert recommendations in real-world scenarios. Progress has been steady, and I’m excited to continue fine-tuning and optimizing the system!