Week 2: Making the Invisible Visible
March 9, 2026
Last week ended with a question I was genuinely curious about: would the spectrograms produced by a 60 GHz radar contain cleanly separable visual signatures for the swing faults I am targeting, or would the differences be so subtle that machine learning would have to do real work to find them? This week, I started building the system that will eventually answer that question.

I began the week by designing and writing a data collection script that streams raw signal data from the radar sensor to disk, along with a signal processing pipeline that transforms that raw data into interpretable visual representations. The key mathematical tool for converting the raw data is the Fourier transform, which converts a signal from the time domain into the frequency domain. By applying a series of these transforms at different stages, I can extract both how far away a moving object is and how fast it is moving, ultimately producing two types of visual outputs: a time-Doppler spectrogram, which shows how the velocity of the golfer’s body evolves over the course of a swing via the Doppler shift, and a range-Doppler map, which adds spatial information about where in the measurement volume that motion is occurring. Getting clean outputs required experimenting with many intermediate parameters, such as window sizes, overlap percentages, FFT lengths, and clutter removal strategies, and much of the week was spent testing different configurations and inspecting the results by eye.
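For readers curious what the spectrogram stage looks like in code, here is a minimal sketch of the short-time Fourier transform step using SciPy. Everything here is an assumption for illustration: the signal is synthetic (a tone whose frequency sweeps up and then down, loosely mimicking the Doppler trace of a swing), and the sample rate, window size, overlap, and FFT length are placeholder values, not my tuned configuration.

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for one slow-time Doppler channel (all values assumed).
fs = 2000.0                       # slow-time sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)     # two seconds of "swing"
inst_freq = 300 * np.sin(np.pi * t / 2.0)      # smooth up-then-down sweep, Hz
phase = 2 * np.pi * np.cumsum(inst_freq) / fs  # integrate frequency to phase
x = np.cos(phase) + 0.1 * np.random.randn(t.size)  # add mild noise

# Short-time Fourier transform: nperseg, noverlap, and nfft are exactly
# the window-size / overlap / FFT-length knobs tuned by eye during the week.
f, tau, Zxx = signal.stft(x, fs=fs, nperseg=256, noverlap=192, nfft=512)

# Log-magnitude gives the time-Doppler spectrogram that gets inspected.
spectrogram_db = 20 * np.log10(np.abs(Zxx) + 1e-12)
print(spectrogram_db.shape)  # (frequency bins, time frames)
```

Plotting `spectrogram_db` with time on the x-axis and frequency on the y-axis produces the kind of image described above, where brighter bands mark faster motion.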
By the end of the week, I shifted to hardware. I assembled the 60 GHz radar baseboard and began testing different sensor configurations, adjusting parameters such as sampling rate and bandwidth to find a setup with a range resolution that could resolve the small, fast motions that distinguish one swing fault from another. A wider bandwidth of 4 GHz allowed for finer range resolution, with my final configuration achieving a theoretical resolution of about 3.75 centimeters, but it also interacted with other parameters in ways that required careful tuning. When I finally ran the full pipeline on test swings in a controlled environment, the spectrograms correctly displayed a broad trace in one direction as the club moved back, a sharp transition at the top of the swing, and a concentrated burst of motion through impact. Seeing those distinctive signatures emerge from raw sensor data, before any machine learning was involved, was one of those moments that makes the early weeks of a project feel worthwhile. The open question going into next week is whether the three radar placement angles I plan to test will produce measurably different spectrograms and which one will ultimately give the classifier the most to work with.
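The bandwidth-to-resolution relationship mentioned above comes from the standard FMCW radar range resolution formula, ΔR = c / (2B). A quick sanity check (using the idealized c ≈ 3×10⁸ m/s) reproduces the 3.75 cm figure:

```python
# FMCW range resolution: delta_R = c / (2 * B).
c = 3.0e8   # speed of light, m/s (idealized)
B = 4.0e9   # chirp bandwidth, Hz

delta_r = c / (2 * B)          # in meters
print(delta_r * 100)           # → 3.75 (centimeters)
```

This is why widening the bandwidth from, say, 1 GHz to 4 GHz quarters the size of the smallest range cell the radar can resolve.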
Comments

Hi Anjali, I’m so happy to see that the time and effort you spent constructing your radar baseboard paid off, with the swings being clear and identifiable in your data! Though we are in very different fields, it reminds me of the satisfaction I feel when the economic trends I hypothesized already appear evident in the data after I collect, clean, and format it, even before running the formal tests! I look forward to seeing whether your setup can capture the data necessary to classify different problems in golf swings and provide feedback to the user. Admittedly, I don’t know much about golf, but one thing I was wondering is, would machine learning be able to identify and tackle more niche or smaller faults? Would they be harder to catch in spectrogram data?