Blog #1 - AI and the Composer!
March 4, 2026
Welcome to my blog! I’m Lawrence Zhou, and today I’m looking at a tsunami sweeping the music world: the rise of AI-generated music.
Suno, Udio, and other AI music generators have exploded in the past three years, and now anyone can type a prompt and get a complete song in seconds. But one question still lingers: what’s actually different between AI and human composition? Over the next 10-12 weeks, I’m going to find out. I will create a corpus of 10 pieces in one genre (EDM, or electronic dance music), where half of the pieces will be generated by AI (Suno) and the other half composed by humans. Each AI piece will be matched with a human piece, at minimum, by tempo, key, length, form/structure, and song theme or subject. Then, I’ll analyze them side-by-side in these domains:
1 – Texture. Essentially, it’s how instruments layer/stack and work together to create, more or less, the “feel” of the piece. I know this sounds super wishy-washy and difficult to analyze without a bunch of confounding variables, but I’ll explain it in much more detail in the next blog!
2 – Orchestration. Since instrumentation is held constant across matched pairs, this will primarily focus on the “leading voice” of the accompaniment.
3 – Form. Since structure is easy to specify, it will be held fixed when generating the pieces. After all, you’ve almost certainly heard of all these: Intro, Verse, Pre-Chorus, Chorus, Bridge, Outro. So, what is left to analyze? Much more than I initially expected – range, energy/loudness, durations, transitions, lyrics, rhythmic and motivic repetition, the list goes on and on.
4 – There’s also a strange category of “other parameters” that I will attempt to analyze objectively to the best of my ability. Some of these are very multifaceted, such as contrast, while others are challenging to pinpoint and study without having a larger sample size. These include orchestral, timbral, or other patterns unique to each song. I guess we’ll see in the coming weeks how far I’m able to research into this territory!
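To make the matching criteria above concrete, here’s a small sketch of how a pairing check could work in code. The field names and tolerances (±2 BPM, ±10 seconds) are my own illustration, not a fixed part of the project’s methodology:

```python
# Hypothetical sketch of the pairing criteria: each AI piece must match its
# human counterpart on tempo, key, length, form, and theme. Tolerances are
# illustrative assumptions, not the blog's actual thresholds.
from dataclasses import dataclass

@dataclass
class Piece:
    title: str
    source: str        # "AI" or "human"
    tempo_bpm: int
    key: str           # e.g. "A minor"
    length_sec: int
    form: tuple        # e.g. ("Intro", "Verse", "Chorus", ...)
    theme: str

def is_matched_pair(a: Piece, b: Piece,
                    tempo_tol: int = 2, length_tol: int = 10) -> bool:
    """Return True if two pieces satisfy the matching criteria."""
    return (abs(a.tempo_bpm - b.tempo_bpm) <= tempo_tol
            and a.key == b.key
            and abs(a.length_sec - b.length_sec) <= length_tol
            and a.form == b.form
            and a.theme == b.theme)

# Example pair (titles and values are made up for illustration):
ai = Piece("Neon Drift", "AI", 128, "A minor", 210,
           ("Intro", "Verse", "Chorus", "Verse", "Chorus", "Outro"),
           "night driving")
human = Piece("City Lights", "human", 128, "A minor", 205,
              ("Intro", "Verse", "Chorus", "Verse", "Chorus", "Outro"),
              "night driving")
print(is_matched_pair(ai, human))  # True
```

A structured check like this also makes it easy to document, for each of the five pairs, exactly which variables were controlled and which were left free.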
If time permits, I’ll expand the genres in which I’m analyzing AI-generated versus human music. Right now, my targets for this additional study are Classical and Jazz.
You might ask, why does this matter? I’ll answer with these questions: if AI can compose music that’s indistinguishable from human work, how could the world of composers, music education, and copyright change? And if there are clear differences, what could that tell us about the uniqueness of human creativity?
By the end of this project, I hope to write a complete 10-12 page analysis comparing these 10+ pieces across the aforementioned musical elements. The final product will also include the full musical corpus with all recordings, as well as spectrograms and analytical charts for each.
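The spectrograms mentioned above can be produced with many audio tools, but for the curious, here is a minimal numpy-only sketch of the short-time Fourier transform that underlies them. The frame size and hop length are arbitrary choices of mine, and the test signal is a synthetic 440 Hz tone rather than anything from the corpus:

```python
import numpy as np

def stft_magnitude(signal, frame_size=1024, hop=256):
    """Magnitude spectrogram via a basic short-time Fourier transform.

    Slides a Hann-windowed frame across the signal and takes the FFT of
    each frame; rows are time frames, columns are frequency bins.
    """
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_size] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a pure A4 (440 Hz) tone at a 22,050 Hz sample rate:
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)

spec = stft_magnitude(tone)
print(spec.shape)  # (time frames, frequency bins)
```

Each frequency bin spans sr/frame_size ≈ 21.5 Hz here, so the 440 Hz tone shows up as a bright band around bin 20. For the real analysis I’ll rely on proper audio software, but the underlying math is just this.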
Stay tuned!
Comments
Great first entry, Lawrence!
Your definitions of those parameters definitely make these music theory terms accessible to the non-specialist.
I’m looking forward to seeing what patterns emerge from your analysis!
Lueders
Hi Lawrence! I love this project idea! I was wondering, though, will these songs have lyrics, or would you consider adding lyrics? I feel like lyrics could possibly change the way listeners connect with and feel music as a whole.
I’ve always been so impressed by your incredible musical literacy, so I’m very excited to follow your project, learn more about the complexities of music, and see how each component makes a song what it is (especially excited because I love jazz :3). For the pieces composed by humans, will you be using past compositions, or will you also be creating some of your own?