Blog Post 2 - Week 2
March 17, 2026
This week, I finalized my improved experiments.
Experiment 1: Originally, I computed the probability that a given neuron “changes its mind” between a perturbed and an unperturbed system. The issue is that this doesn’t pinpoint the reason behind the change. Here’s my improved procedure:
The Finite-Time Lyapunov Exponent (FTLE) quantifies how quickly nearby trajectories of a system separate over a finite time window. If the FTLE > 0, small perturbations amplify exponentially, which is the hallmark of chaos. If the FTLE is approximately 0, perturbations neither grow nor shrink. And if the FTLE < 0, nearby trajectories converge, meaning the system is contracting toward a stable attractor, but that regime is out of the scope of this project. The point is, this is a much better approach to quantification than the one I used before. Second, I am turning off noise for experiment 1. I can add it back later for biological realism, but I have to prove the deterministic system is chaotic first. Next, I will measure the flip probability as before, but this time I’ll look for a correlation between it and the FTLE. The critical question behind this experiment is whether divergence and chaos (a positive FTLE) correlate with crossing a decision boundary.
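For reference, here is a minimal sketch of how I might estimate the FTLE numerically: perturb a copy of the state, measure how fast the copies separate, and renormalize the perturbation each step so it stays tiny (a Benettin-style scheme). The function names and parameters are placeholders, not my final code; I sanity-check it on the logistic map, whose exponent is known to be ln 2.

```python
import numpy as np

def ftle(step, x0, T=1000, eps=1e-8):
    """Estimate the finite-time Lyapunov exponent of a map.

    `step` advances the state one time step; `x0` is the initial state.
    The perturbation is renormalized every step so it always stays
    infinitesimally small (Benettin-style).
    """
    x = np.array(x0, dtype=float)
    y = x.copy()
    y[0] += eps                      # tiny perturbation along one coordinate
    log_growth = 0.0
    for _ in range(T):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)    # how far the two copies drifted apart
        log_growth += np.log(d / eps)
        y = x + (y - x) * (eps / d)  # shrink the gap back to size eps
    return log_growth / T            # > 0: chaotic, < 0: contracting

# Sanity check on a known system: the logistic map at r = 4 has
# Lyapunov exponent ln(2) ~ 0.693, so the estimate should land near that.
print(ftle(lambda x: 4.0 * x * (1.0 - x), np.array([0.3])))
```

The same routine would run on my network by passing its one-step update function as `step`.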
Experiment 2: Before, this experiment quantified how the trajectories of two different systems diverged. The issue is that predictability and unpredictability do not equate to agency, which requires control. For example, a system can be predictable yet uncontrollable (a huge boulder rolling down a hill) or unpredictable and uncontrollable (static noise); neither exhibits agency. So, the question for my neural network becomes not “can I control the system?” but rather “how long can I control the system before it devolves into chaos?” In other words, is the control horizon (the time window where inputs matter) longer than the decision time (how long it takes to act)? If not, the system can never be reasons-responsive, or at least not entirely. So, how will I approach this?
I plan to do the following: as before, I will run the system once with no external input (only background noise) and record the decision readout from the network. I will then run a separate, otherwise identical simulation with a structured input pulse at time t, recording that decision readout as well. I’ll then systematically delay t to see how it affects the decision readout. I’m not trying to figure out whether control decays over time; that mathematically has to be the case. Rather, I’m trying to figure out how much unpredictability we can have in a system given a certain decision window.
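To make the delay sweep concrete, here is a sketch using a toy leaky accumulator in place of my actual network; the model, parameter values, and function names are all stand-ins. The key trick is reusing the same random seed in the baseline and pulsed runs, so the background noise is identical and any flipped decision is attributable to the pulse alone.

```python
import numpy as np

def run_trial(pulse_time=None, T=200, pulse=2.0, seed=0):
    """Toy leaky accumulator standing in for the real network (a sketch,
    not my actual model). Background noise drives the state; an optional
    structured pulse at `pulse_time` pushes it toward +1. Returns +/-1."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for t in range(T):
        drive = pulse if t == pulse_time else 0.0
        x = 0.95 * x + drive + rng.normal(0.0, 0.3)  # leak + input + noise
    return 1 if x > 0 else -1

def flip_probability(pulse_time, n_trials=200):
    """Fraction of trials where the pulse changes the decision readout,
    comparing pulsed vs. unpulsed runs with identical background noise."""
    flips = sum(run_trial(None, seed=s) != run_trial(pulse_time, seed=s)
                for s in range(n_trials))
    return flips / n_trials

# Sweep the pulse delay. In this toy model an early pulse leaks away
# before the readout, so only late pulses still control the decision;
# in a chaotic network the shape of this curve is the control horizon.
for t in (10, 100, 190):
    print(t, flip_probability(t))
```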
Experiment 3: Previously, this was my biased-input reasons-responsiveness test. Its logic, however, was fundamentally flawed. Just as a coin biased to always land on heads or a die biased to always roll six doesn’t exhibit reasons-responsiveness, neither can my system. In other words, my system may be responsive to input, but that alone doesn’t show it can rationally process given information. Agency requires differentiating functional signals from noise. Thus, here is my revised method:
As I said before, I need to prove my network can take in information and actually process it. So, instead of injecting a biased input at time t, I’ll feed the system a long evidence stream that is partially coherent signal and partially noise. I’ll vary the ratio between the two across trials, ranging from pure noise to pure signal, and I’ll keep background noise active throughout. After that, I’ll plot the probability of choosing a certain decision against the coherence level of the input, which gives a psychometric curve. If the slope of this curve is >> 0, then the network exhibits rational behavior: stronger evidence means stronger commitment. Additionally, to quantify this relationship mathematically and guard against false positives, I’ll compute a Mutual Information Score, which measures how much information about the input you can recover from the result.
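Here is a rough sketch of the psychometric sweep and the Mutual Information Score, again with a toy evidence integrator standing in for my network; every name and parameter below is hypothetical. Each time step mixes a coherent signal with fresh noise in a ratio set by the coherence level, and the MI is computed from empirical counts over binary inputs and decisions.

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(coherence, direction, T=100):
    """Toy evidence integrator (stand-in for the real network). Each time
    step is a mix of coherent signal and pure noise; the decision is the
    sign of the accumulated evidence."""
    evidence = coherence * direction + (1.0 - coherence) * rng.normal(0, 1, T)
    return 1 if evidence.sum() > 0 else -1

def mutual_information(xs, ys):
    """MI in bits between two +/-1 sequences, from empirical counts."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    mi = 0.0
    for x in (-1, 1):
        for y in (-1, 1):
            pxy = np.mean((xs == x) & (ys == y))
            px, py = np.mean(xs == x), np.mean(ys == y)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Psychometric curve: P(choose +1) as a function of input coherence.
# A steep, monotone rise is the "rational behavior" signature.
for c in (0.0, 0.05, 0.1, 0.25, 0.5):
    choices = [decide(c, direction=1) for _ in range(500)]
    print(c, np.mean(np.array(choices) == 1))
```

With perfect processing of a balanced binary input, the MI tops out at 1 bit; an unresponsive system scores near 0.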
Finally, for experiment 4, I’m increasing the scope of the heat diagram. Like before, I’m varying both the gain and noise values in the network. This time, however, I’ll be measuring the dependent variables of the three previous experiments: the Lyapunov exponent, the responsiveness window, the Mutual Information Score, the psychometric curve slope, and the flexibility measure from experiment 3. I’m still deciding how to weigh each of these results.
In conclusion, I’ve made a ton of progress beyond my old code this week. While I haven’t fully implemented these designs yet, I’ll spend this week working on them and may have some new results next week.