Beyond Automation: Generative AI’s Surprising Emotional Resonance

A Reddit post highlighted a paper posted on arXiv.org titled “Echoes of Humanity: Exploring the Perceived Humanness of AI Music.” The paper investigates how well humans can tell the difference between music created by generative AI and music created by humans, and what cues people use to make those judgments.

The AI songs were taken from real-world users posting their creations on Reddit’s r/SunoAI (they were not created by the researchers) and then paired with human songs from Jamendo’s MTG-Jamendo dataset. 653 participants compared song pairs (some randomly matched, others highly similar) and were asked to identify which song was human-created and which was generated by an AI model.

When the songs were randomly paired, listeners did no better than chance (~53%) at identifying which was which; but when the pairs were musically similar, listeners did much better (~66%). Additionally, participants with a musical background and younger participants tended to perform better, suggesting that training and exposure shape how well people detect AI-generated music.

The post on Reddit took the results (for the randomly paired songs) as proof that AI has “passed the musical Turing test,” and in the ensuing conversation, someone asked:

honest question, is anybody doing this as anything other than a money grab? aside from a neat tech trick and yet another way to cut the cost of music production in pop music… what is the goal here? are small town shops clambering for their own jingles to make radio commercials or something? podcast bumpers? where is the benefit supposed to be with something like this

Folks provided several answers, but they boiled down to the following:

  • Allows people to create music for homemade movies, social-media reels, local commercials, public-access shows, etc. without incurring much cost
  • Allows people to get exactly the music they want or need in only a few minutes without compromising their “vision”

DJ Kyle

Curious as to what all the fuss was about, I gave SunoAI a spin, exploring its generative AI capabilities. I started with a simple prompt: “create a jazz song about an artificial intelligence generating a jazz song.” Less than a minute later, SunoAI served up “Swingin’ Circuits.”

Here are some sample lyrics:

It’s a Turing test, and it’s feeling blessed,
To scat with a silicon soul.
The trumpet sings, and the algorithm swings,
Like a player who’s found their role.

I’m not going to lie. It’s an enjoyable song, and it gave me four different versions of it, though all were (I guess by definition) formulaic.

A Different Test

Playing around with a musical AI is one thing, but what about seeing if it can communicate a deeply personal emotion? I prompted it with the following: “Create a delta blues song about the pain of watching one’s elderly mother suffer from late-stage Parkinson’s disease.” This is a pain I know deeply and well, but I don’t have the skills to express that pain on a guitar, in a singing voice, or in lyrical form. Can an AI model channel that emotion for me such that it brings me some kind of relief?

It failed miserably with the prompt. I received two versions, both of which were perfectly representative of the delta blues, but neither of which contained intelligible lyrics related to my situation. Instead, it was a stereotypical delta-blues male voice singing noises that sounded like mumbled words.

Okay, I’ve used AI long enough to know that not every response is of a decent quality, and it’s true that delta blues singers can be hard to understand, so I tried again, changing it to the Chicago blues.

It did much better.

I got two versions of a song titled “Shakin’ Shadows.” Here are some sample lyrics that feel poignant to me:

Her eyes still shine
Though the body betrays
Memories flicker
Like fireflies in haze
I reach for her
But she slips away

The music itself is slow, mournful, with a smooth bluesy voice and an emotional guitar solo.

In the second version (same lyrics), the opening is a deeply mournful guitar solo and the tempo is much slower, with the full band coming in heavy on the chorus. It could be a performance from an incredible Chicago blues bar-band, a group that regularly performs covers of Gov’t Mule.

I could easily see myself listening to this AI-generated song on the regular and feeling real emotions from it.

AI Isn’t Only About Money

There are plenty of reasons to hate on generative AI: environmental degradation, reinforced biases, the proliferation of AI slop, a potential decline in critical thinking, job losses in virtually every industry and category, humanity’s potential loss of control over automated systems, the erosion of privacy, the marketing hype, the financial bubble, the corporate control, the devaluing of humanity, etc.

Each of those reasons is valid.

However, there’s an element to generative AI that has nothing to do with replacing workers or collapsing industries. It’s the way generative AI helps ordinary people express things they don’t have the training, the craft, or the time to express on their own.

We need to appreciate when generative AI isn’t replacing labor, but silence.

As a teacher and a colleague, I’ve seen special-education students and unconfident adults use generative AI to successfully communicate their thoughts, emotions, and needs in ways they never would have tried without it. I’ve seen teams in conflict use artificially generated images to focus their attention on objective depictions of their ideas rather than on the personalities around the conference table. I’ve seen verbally challenged students convert complex emotions into simple words just so they could prompt an image generator to create an illustration that accurately conveys the way they feel.

And now I’ve generated artificial music that expresses some of the grief I feel at my mother’s descent in ways I cannot express on my own.

Generative AI has its (significant) flaws. But it also expands the expressive possibilities of non-expert humans by acting as an uncanny collaborator. It may seem strange to feel understood by a machine that feels nothing, but the feeling is real, and that matters.

In all of our discussions about generative AI, we need to remember to look not only at what it replaces, but at what, and who, and how it enables.
