Reaching the finish line: Week 9
May 11, 2025
Welcome back to my blog!
After discussing the hyperparameters, I have decided to simulate bootstrapping up to 3 times and compare the accuracy with 0, 1, 2, and 3 bootstrapping steps. Now I can finally discuss my work on the neural network. Most neural networks end with a final layer that uses a sigmoid function, so I can reuse my work from logistic regression there. Before that final step, however, neural networks rely on activation functions in the hidden layers. The most common activation function is ReLU (0 for negative numbers, x for positive numbers), but computing its derivative would require me to decrypt certain values, just as I needed to do for KNNs. This again poses risks, because you don't want to be sending any plaintext when using homomorphic encryption. Another possible activation function is tanh. Similar to the sigmoid function, it maps large positive inputs toward 1 and large negative inputs toward -1.
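To make the comparison concrete, here is a minimal plaintext sketch (plain NumPy, no encryption) of the two activations I am weighing; the comments note why ReLU is awkward under CKKS. The inputs are just illustrative values I picked.

```python
import numpy as np

def relu(x):
    # ReLU: 0 for negative inputs, x for positive inputs.
    # Its gradient is a step function, so evaluating it under CKKS
    # would need a sign comparison, which pushes us toward decrypting
    # intermediate values (the same problem I hit with KNNs).
    return np.maximum(0.0, x)

def tanh(x):
    # tanh: smooth and bounded between -1 and 1, so like sigmoid it is
    # a candidate for a low-degree polynomial approximation.
    return np.tanh(x)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(relu(x))
print(tanh(x))
```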
One might think that we can do a least squares approximation for tanh like we did for sigmoid, since the two functions look extremely similar. The tanh function is (e^x - e^(-x))/(e^x + e^(-x)). Just like with the sigmoid function, we reach a point where we need to bound the interval; I decided to stick with [-4, 4], and using a similar method I reached the degree-5 polynomial 0.7642x - 0.0732x^3 + 0.0027x^5. I tried to find a degree-3 polynomial, but testing even without the CKKS approximation was already giving very inaccurate results. I suspect it won't be worth using this approximation for tanh, and we will unfortunately have to stick with the ReLU function instead, which, although much faster, will also be less secure. A final option could be using a different type of activation function.
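For reference, here is a rough sketch of the kind of least squares fit I mean, done in plain NumPy with no CKKS encoding. The sampling grid is my own choice, so the coefficients it produces may differ slightly from the ones quoted above.

```python
import numpy as np

# Sample tanh densely on the bounded interval [-4, 4] used above.
x = np.linspace(-4.0, 4.0, 2001)
y = np.tanh(x)

# Ordinary least-squares fit of a degree-5 polynomial. Because tanh is
# odd and the grid is symmetric, the even-degree coefficients come out
# essentially zero, leaving only the x, x^3, and x^5 terms.
coeffs = np.polyfit(x, y, 5)            # highest-degree coefficient first
fit = np.polyval(coeffs, x)

print("degree-5 coefficients:", np.round(coeffs, 4))
print("max |error| of the fit on [-4, 4]:", np.max(np.abs(fit - y)))

# The polynomial quoted in the post, for comparison.
quoted = 0.7642 * x - 0.0732 * x**3 + 0.0027 * x**5
print("max |error| of the quoted polynomial:", np.max(np.abs(quoted - y)))
```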
The job of an activation function is to introduce non-linearity. This is the main reason why we use the ReLU function or exponentiation. If none of these prove fruitful, I will look at polynomial activation functions.
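As a rough illustration of what a polynomial activation could look like, here is a tiny plaintext sketch using the square function x^2, which is attractive in homomorphic settings because it costs only one ciphertext multiplication. Whether it actually trains well for my model is something I would still have to test.

```python
import numpy as np

def square_activation(x):
    # A degree-2 polynomial activation: under CKKS this is a single
    # ciphertext-ciphertext multiplication, with no comparisons and
    # no need to decrypt anything.
    return x * x

# Plaintext sanity check that the map is non-linear.
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(square_activation(x))   # 4.0, 0.25, 0.0, 0.25, 4.0
```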