Week 1: Introduction
February 27, 2026
Hi everyone! I’m Yujie, and welcome to my Senior Project Blog! In this post, I’ll introduce you to my project, its background, and my goals.
Background
This idea was inspired by my experience in the MIT Beaverworks Build a CubeSat challenge, where I designed a mini-satellite prototype to detect power outages. I became interested in whether I could use remote sensing to detect disasters that are more complex to analyze, such as earthquakes. Our team explored that idea but ultimately didn't pursue it, since accounting for the shapes of damaged buildings is much harder than simply comparing the brightness of city lights at night.
Throughout that project, I doubted whether using satellite images for disaster detection in big cities would be worthwhile, assuming they had the infrastructure to support recovery on their own. It was after hearing from my mom about the 2008 Sichuan earthquake that I realized its value. For several days after the initial event, there was almost no news coverage of the earthquake's extent beyond the fact that it had happened, as rescue operations on foot were slow and most internet and communication infrastructure was likely destroyed.
Prior Knowledge
The traditional, and still common, method for damage assessment relies on emergency crews and volunteers assessing damage on foot, a process that is not only dangerous for the people involved but also slow, taking up to 24 to 49 hours. Thus, there is a growing body of research on using remote sensing to assess damage faster. Remote sensing solutions, such as satellite images, provide constant surveillance over an area, so response teams can assess damage and brief rescuers before they arrive on site, which is especially valuable when communication is cut off by the earthquake. Machine learning models can automate this assessment.
There has been some success using CNNs (Convolutional Neural Networks) for damage assessment, but more recently, dedicated segmentation models like U-Net, DeepLabV3+, and SegFormer have been developed for segmentation tasks. These models are end-to-end: instead of requiring intermediate steps like classifying superpixels, they produce a single mask that assigns a class label to each pixel of the image. These are the models I will be using throughout this project.
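To make "mask" concrete, here is a toy sketch (not actual model output; the class labels and tile size are made up for illustration) of what a segmentation model's end product looks like: an array the same shape as the input image, where each pixel holds a class label that can then be summarized, for example to estimate damage extent in a tile.

```python
import numpy as np

# Hypothetical 4x4 segmentation mask over a small image tile.
# Assumed label scheme: 0 = background, 1 = undamaged building, 2 = damaged building.
mask = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 0],
    [0, 0, 0, 0],
])

# Count pixels per class to summarize how much of the tile is damaged.
classes, counts = np.unique(mask, return_counts=True)
summary = dict(zip(classes.tolist(), counts.tolist()))
print(summary)  # {0: 9, 1: 3, 2: 4}
```

A real model would predict this mask from before/after satellite imagery, but the downstream use is the same: per-pixel labels that rescue teams could aggregate into damage maps.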
Goals & Closing Thoughts
By the end of this Senior Project, I hope to gain more insight into the architecture of segmentation models and how they learn. I will analyze the features they pick up on and study each model's strengths and weaknesses to understand how to optimize them.
Citations
Ritwik Gupta, Richard Hosfelt, Sandra Sajeev, Nirav Patel, Bryce Goodman, Jigar Doshi, Eric Heim, Howie Choset, Matthew Gaston, 2019, xBD: A Dataset for Assessing Building Damage from Satellite Imagery, arXiv, https://arxiv.org/abs/1911.09296
Yuanzhao Qing, Dongping Ming, Qi Wen, Qihao Weng, Lu Xu, Yangyang Chen, Yi Zhang, Beichen Zeng, 2022, Operational earthquake-induced building damage assessment using CNN-based direct remote sensing change detection on superpixel level, International Journal of Applied Earth Observation and Geoinformation, Volume 112, 102899, ISSN 1569-8432, https://doi.org/10.1016/j.jag.2022.102899.
Comments
Hi Yujie. I look forward to following your project over the next 10ish weeks. The background seems very well researched, and your direction looks clear. I have a few questions/suggestions to point out. I've used the ResNet model in the past to process satellite imagery, and I know there's a model called ResUNet, which combines beneficial aspects of both the ResNet and UNet models, the latter of which you mentioned in this post. If you have the time, I think you should look into these two models and consider whether they may be appropriate for this task. Looking forward to your next blog post!
Hi Anav, thanks for visiting my blog! I'll definitely look into ResNet and ResUNet if I have time, even if just to learn more about them. From your experience, what tasks do you commonly use ResNet for, and what advantages does it have?
The task that I had originally was to classify farmland vs non-farmland (cities, roads, etc). The model was pretty good at extracting spatial features of satellite imagery and classifying larger image tiles. As far as I know, UNet is better at pixel-level segmentation based on its architecture, and ResNet is better for larger areas.
Very insightful, thank you!
Hi Yujie! Just read through your blog and thought your project sounded very interesting and relevant. I’m excited to see where your project goes over the next several weeks, as the results could be extremely beneficial in identifying natural disasters and creating a more efficient rescue system. I was just wondering what you mean by “segmentation models,” and I was also interested in whether you’d be looking into how to integrate these results with rescue teams (like how would they use these images and results to determine the best route to get around damaged areas). Looking forward to learning more! 🙂
Hi Yujie!
Your project sounds so cool, and I love the thought put into the background of this project, including the information about the Sichuan earthquake and how your interest developed from your summer program. I'm just curious how the experimentation process will work. Will you be using past data to train a model, then testing its predictions on a later year's earthquake data against the actual data to see if it's accurate? Excited to see where your research goes!
Hi Elaine! Thank you! I will be using past data of earthquakes (though it won't be of Sichuan specifically), with imagery from before and after the event. I'm not training a model to predict earthquakes, but rather assessing the damage of an earthquake that already happened. This is why I'm using a segmentation model, which will classify each pixel of an image as part of a damaged building, an undamaged building, or background. I go into more detail on my methodology in Blogs 2 and 3, so feel free to check those out too!
Hi Yujie! Really cool project! The connection to the Sichuan earthquake is super interesting and makes the motivation super clear: when communication infrastructure is down or damaged, having an automated aerial assessment pipeline could save lives. One thing I'm curious about is how the time delay affects this idea. If a satellite only passes over the affected area every few hours, does that revisit period matter more than the model's inference speed? Also, how does the model handle physical disturbances like harsh weather or dust? Looking forward to seeing more!