Week 10: Capturing Layer Outputs With Forward Hooks
May 12, 2023
Welcome to Week 10 of my Senior Project Blog!
This week I will wrap up my senior project by showing how I used forward hooks to extract latent embeddings from the encoder and decoder layer outputs of the autoencoder CAD models.
To recap, PyTorch hooks are functions you can attach to any layer; they get called each time that layer is used, as shown in the figure below.
A hook has a predefined signature and can be registered on any neural network module (nn.Module) object. A forward hook lets us intercept the forward pass of a specific module and process its inputs and outputs. As we saw in the method signature last week, a forward hook takes three arguments:
module: the instance of the layer the hook is attached to
input: a tuple of tensors that form the input to the module's forward pass
output: the tensor that is the output of the module's forward pass
After defining the hook, we need to “register” it with the appropriate layer using the register_forward_hook method. Once registered, the hook is executed right after the module's forward method, without requiring any additional action to trigger it manually.
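As a minimal sketch of this pattern (using a toy stand-in model rather than the actual CAD models):

```python
import torch
import torch.nn as nn

# A toy module standing in for one of the autoencoder layers.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU())

# The hook's signature: it receives the module, a tuple of its
# inputs, and its output, and runs right after forward() finishes.
def print_shapes(module, input, output):
    print(module.__class__.__name__, input[0].shape, output.shape)

# Register the hook on the first layer; the returned handle can
# later be used to detach the hook with handle.remove().
handle = model[0].register_forward_hook(print_shapes)

# Any forward pass now triggers the hook automatically.
model(torch.randn(2, 8))
handle.remove()
```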
An example of the processing pipeline, with the steps to load the models, run inference, and extract the embeddings, is outlined below.
- First, we load one of the tile models for the fragment on Chromosome 1 in the locus range 115654976-116021192 and examine the model we get:
- Next, using the VCF-matching code, we find the VCFs that we want to run through the inference pipeline with this fragment model.
- Once we have the inference input VCF and the model files loaded, we can register the forward hooks and capture the relevant layer outputs, as shown in the code below for this model:
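Here is a minimal sketch of that step; the checkpoint path, layer names, and input construction are hypothetical placeholders, since the real ones come from the lab's tile models and VCF-matching pipeline:

```python
import torch

# Load the trained tile model for this fragment
# (the path is a hypothetical placeholder).
model = torch.load("tile_model_chr1_115654976-116021192.pt")
model.eval()

# Dictionary that collects the output activations of every hooked layer.
activations = {}

# Template function: returns a hook that stores the named layer's
# output tensor in the activations dictionary.
def getActivation(layer_name):
    def hook(module, input, output):
        activations[layer_name] = output.detach()
    return hook

# Register a forward hook on each layer whose output we want to
# capture (layer names are hypothetical; model.named_modules()
# lists the real ones).
model.encoder[0].register_forward_hook(getActivation("encoder_first"))
model.encoder[-1].register_forward_hook(getActivation("encoder_final"))
model.decoder[0].register_forward_hook(getActivation("decoder_first"))
model.decoder[-1].register_forward_hook(getActivation("decoder_final"))

# Stand-in for the tensor built from the matched VCF input.
input_tensor = torch.randn(1, 512)

# Running inference fires every registered hook and fills the
# activations dictionary as a side effect.
with torch.no_grad():
    reconstruction = model(input_tensor)
```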
In the above code, we first define a template function, getActivation, which returns a hook that stores a layer's output activations in a shared dictionary. We then register a forward hook on each layer we want to capture by explicitly calling register_forward_hook(getActivation(<layer_name>)). As we call the model on new inputs, each registered hook fires and stores that layer's output tensor in the activations dictionary under the layer name that was passed in.
The output tensors for the first and final encoder and decoder layers in this example are shown below:
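For reference, a quick way to inspect what the hooks captured; the actual shapes depend on the fragment's tile model architecture:

```python
# Print each captured layer's output shape.
for layer_name, tensor in activations.items():
    print(f"{layer_name}: {tuple(tensor.shape)}")
```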
From the above, we can see how the outputs of different layers can be captured and saved as we run the inference pipeline with new inputs on the trained CAD models.
My work with forward hooks has provided a scalable method for running the processing pipeline on pre-trained models with new inputs and extracting intermediate layer outputs, and it has identified a few avenues for optimization that our lab will pursue in the future. Overall, my project has demonstrated a robust approach to extracting embeddings with forward hooks while reprocessing the CAD modeling pipeline with new data. Using these extracted embeddings, our lab will construct a risk score from an individual's genomic profile in a future project.
Thank you for reading and following along on my senior project journey over the past few weeks!
Sources:
- Bhaskhar, Nandita. “Intermediate Activations – the Forward Hook.” Stanford University, 17 Aug. 2020, https://web.stanford.edu/~nanbhas/blog/forward-hooks-pytorch/.
- “Forward and Backward Function Hooks.” PyTorch Tutorials 2.0.0+cu117 Documentation, PyTorch, https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks.
- Kathuria, Ayoosh. “Debugging and Visualisation in Pytorch Using Hooks.” Paperspace Blog, 9 Apr. 2021, https://blog.paperspace.com/pytorch-hooks-gradient-clipping-debugging/.