Week 0: Entering a World of Opinions
February 6, 2026
Ask someone why people change their minds, and you will get answers ranging from “good arguments” to “group pressure” to “emotions.” Ask how to measure that change, and things suddenly get much more complicated.
Public opinion shapes elections, policy, and social movements. However, the tools we use to study opinion change often lag behind the complexity of the phenomenon itself. Over the past decade, researchers have turned to machine learning (ML) and natural language processing (NLP) to analyze deliberative discussions: structured conversations where people debate, reflect, and sometimes change their views on difficult, complex issues. These models are powerful, but there is a major problem: they are hard to reproduce, hard to compare, and even harder to scale.
Welcome to my Senior Project! My name is Rishi Gupta, and over the next ten weeks, I will be stepping beyond individual models and into the infrastructure that supports them. My project, “Building an MLOps Platform for Predicting Opinion Shifts,” focuses on designing a system that makes machine-learning research on deliberation more automated, reproducible, and scalable.
The main question driving this work is simple but ambitious: How can we build machine-learning systems that don’t just produce results, but produce them efficiently, in a way we can trust, repeat, and extend?
I will be working with real deliberation datasets provided by the Deliberative Democracy Lab, combining conversation transcripts with pre- and post-discussion survey data to predict opinion shifts. I already have experience analyzing such datasets, but this time I have a new goal in mind. Instead of treating this project as an isolated experiment, I aim to build an MLOps platform that handles data preprocessing, feature storage, model training, evaluation, and version tracking, designed so that future researchers can plug in new models and data without rebuilding the entire system from scratch.
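To make the idea of those stages concrete, here is a minimal sketch of a pipeline that records a content hash of each artifact it produces, which is one common way to get version tracking and reproducibility. All of the names here (`preprocess`, `build_features`, `run_pipeline`) and the hashing scheme are hypothetical illustrations, not the platform's actual code:

```python
import hashlib
import json

def preprocess(transcripts):
    # Normalize raw conversation transcripts (strip whitespace, lowercase).
    return [t.strip().lower() for t in transcripts]

def build_features(texts):
    # Toy feature extraction: token count per transcript. A real system
    # would compute linguistic and survey-derived features here.
    return [{"n_tokens": len(t.split())} for t in texts]

def fingerprint(obj):
    # Version-tracking idea: hash each artifact so any run can be
    # reproduced and compared against later runs.
    payload = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

def run_pipeline(transcripts):
    cleaned = preprocess(transcripts)
    features = build_features(cleaned)
    # Record a fingerprint at each stage; a real platform would log these
    # alongside model metrics and dataset versions.
    lineage = {
        "raw": fingerprint(transcripts),
        "cleaned": fingerprint(cleaned),
        "features": fingerprint(features),
    }
    return features, lineage

features, lineage = run_pipeline(["  I changed my MIND after the debate  "])
print(features)                               # [{'n_tokens': 7}]
print(lineage["raw"] != lineage["cleaned"])   # True: preprocessing changed the data
```

The point of the sketch is the lineage dictionary: if every stage records what it consumed and produced, a future researcher can swap in a new model while verifying that the upstream data is byte-for-byte identical.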
Now for the big question everyone always asks: why does this even matter?
In fields like computational social science, models are often impressive but fragile. Small datasets, inconsistent preprocessing, and undocumented pipelines make it difficult to tell whether improvements come from better modeling, different experimental choices, or just the specific dataset itself. By focusing on infrastructure rather than just algorithms, this project prioritizes reproducibility throughout the research process, making it easier to compare results across studies and move the field forward in a more systematic way.
This blog will serve as my research log and reflection space. I will document the technical challenges of building scalable ML pipelines, the tradeoffs between model performance and stability, and the insights gained from applying engineering principles to social science research. Expect honest discussions of what works and what doesn’t, as well as reflections on what it means to study human behavior with computational tools.
Opinion change is messy. Human conversations are nuanced, emotional, and unpredictable. My goal is not to oversimplify that complexity, but to build systems robust enough to study it responsibly.
Excited for the project to begin!
