Week 0: It’s a Bit... It’s a Qubit... It’s a Computational Cage Match!
January 29, 2026
Hi everyone! My name is Patrick Zhou, and welcome to the very first entry of my Senior Project blog. Over the next few months, I invite you to join me as I dive into the complex and often invisible war between classical computing and the emerging frontier of quantum mechanics. My project, formally titled Comparative Analysis of Error Mitigation for Quantum Systems and Artificial Neural Networks under Additive White Gaussian Noise, is a bit of a mouthful, but the core mission is actually quite simple: I want to find out if a quantum brain is sturdier than a classical one when the world gets messy.
My journey into this high-tech rabbit hole didn’t actually start with a love for physics, but rather through a study of cybersecurity and encryption. It was during a high school class on qubits that I had a sudden, slightly frightening realization: quantum algorithms have the theoretical power to render our existing security measures obsolete. That fear quickly turned into fascination. I needed to understand the future of computation before it arrived, which led me to a summer research program at UCSB where I coded my first quantum circuits using Python and Qiskit. Now, I am systematically expanding that experience to answer a burning question about how these systems handle noise.
In the context of machine learning, noise isn’t just loud sounds; it’s static, corruption, and interference that ruins data. Think of a grainy photo taken in low light or a fuzzy MRI scan. Classical Convolutional Neural Networks (CNNs), the kind of AI currently running on your phone, are great at reading clean data, but they often stumble when the picture gets blurry. My project pits these classical networks against a Quantum Neural Network (QNN). The theory is that quantum properties like superposition (a qubit existing in a blend of states at once) and entanglement (where distant parts of the system remain correlated) might allow the QNN to see the big picture better than a classical network, making it more robust against errors.
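To make "Additive White Gaussian Noise" concrete, here is a minimal sketch of how an image can be corrupted with it. The function name, the sigma value, and the flat gray test image are all illustrative choices of mine, not details from the project itself:

```python
import numpy as np

def add_awgn(image, sigma, rng=None):
    """Corrupt an image with pixel values in [0, 1] by adding white
    Gaussian noise of standard deviation sigma, then clip back into range."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# A flat mid-gray 28x28 "image" (MNIST-sized), corrupted at sigma = 0.3
clean = np.full((28, 28), 0.5)
noisy = add_awgn(clean, sigma=0.3, rng=np.random.default_rng(0))
```

The key property of this kind of noise is that every pixel is perturbed independently by the same zero-mean Gaussian, so turning sigma up or down gives a clean, single-knob way to control how "messy" the world gets for each model.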
To test this, I am designing a computational cage match. I will be building three distinct models: a standard high-resolution CNN, a low-resolution CNN for a fair comparison, and a hybrid QNN. I will first train all three on the famous MNIST dataset of handwritten digits, essentially the "Hello World" of machine learning, and then bombard them with Additive White Gaussian Noise. By forcing both the classical and quantum models to look at low-quality, pixelated inputs, I aim to level the playing field. This ensures that if the quantum model wins, it’s not because it had superior data, but because its architecture is genuinely smarter at filling in the gaps.
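One simple way to produce the low-resolution inputs described above is block average pooling. The post doesn't specify a target resolution, so the 4x4 output here (one pixel per qubit in a hypothetical 16-qubit encoding) is just one plausible choice for a sketch:

```python
import numpy as np

def downsample(image, block):
    """Average-pool a square image over non-overlapping block x block windows."""
    h, w = image.shape
    return image.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# Downsample a 28x28 MNIST-sized image to 4x4 using 7x7 blocks
img = np.linspace(0.0, 1.0, 28 * 28).reshape(28, 28)
low_res = downsample(img, block=7)
```

Feeding the same pooled images to both the low-resolution CNN and the QNN is what makes the comparison fair: any remaining accuracy gap under noise comes from the architecture, not the data.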
This matters because the real world is rarely noise-free. From autonomous vehicles driving through rain to financial algorithms parsing corrupted data, we need AI that doesn’t break when conditions aren’t perfect. If my research shows that QNNs are naturally more resilient to noise, it could validate the theoretical benefits of quantum computing for industries like healthcare and defense. I have a long road of coding in Google Colab ahead of me, complete with the challenges of simulating quantum hardware on classical machines, but I am ready to see if the future of AI really is quantum. Stay tuned for the results!
Patrick Zhou, signing off.
Comments

Hi Patrick. This work is super exciting! One thing I appreciated was your connection to healthcare at the end and how QNNs offer a solution to understanding noisy data in interdisciplinary work. As a researcher interested in the clinical space, this intersection is extremely crucial for future developments in computational biology. I look forward to following your blogs and hearing about the final results at the Senior Project Symposium!