Adobe informed me Firefly can now generate soundtracks, which sounded like exactly the sort of sentence that should be followed by a small fire and a class action lawsuit.
So naturally, I tested it.
I ran four short cues through it: trailer tension, orchestral uplift, prestige-doc montage, and a sorrowful film cue. I’ll drop the prompts in the comments with the audio files.
My verdict?
Disturbingly competent.
Not “replace a real composer” competent. Not “go score a feature and move me to tears” competent. But competent enough to make you stop mid-sip and mutter, well... that’s deeply irritating.
It is lightning fast. Relatively cheap. Very derivative. Occasionally inspiring in the same way a glitchy old synth can suddenly cough up one usable note before going back to being a bin fire.
That surprised me.
Every now and then it spits out a tiny melodic fragment, a harmonic turn, or some odd little textural accident that makes you think, hang on, there’s a seed in there. Not a tree. Not even a branch. A seed. So yes, I can see this thing being useful for temping, sketching, placeholder cues, testing tone, or kicking your own brain loose when it’s being stubborn.
But let’s not start decorating the corpse.
Because the ceiling arrives fast.
First problem: the sound libraries. To my ears, they are not top shelf. Not even close. A lot of it has that faint whiff of 90s General MIDI cosplay, like a Roland Sound Canvas got dragged through a cinematic trailer preset and told to act expensive. The shape is there. The sonics are not.
Second problem: emotion.
This thing can imitate the behaviour of feeling. It can do the gestures. The posture. The raised eyebrow. The meaningful pause. It knows what “sad” tends to sound like. It knows what “tension” usually wears to dinner.
But it doesn’t bleed.
It would almost land an emotional beat for two or three seconds, then slide straight into music-theory-correct wallpaper. Harmonically fine. Structurally plausible. Dramatically dead. The notes line up. The wound does not open.
And that, to me, is still the whole damn game.
Music for picture is not just organised sound. It is pressure. It is subtext. It is the thing under the dialogue telling the audience what the soul of the moment costs. If the cue is only functioning as sonic furniture, then all you’ve done is decorate the room where the scene died.
There’s another catch, too. I’m not seeing any option to save the result as anything except a WAV. No MIDI. No score. No stems that I can find. So if your plan was to drag it into a DAW, swap out the bargain-bin faux orchestra for real libraries, and actually develop the idea like a composer... tough luck. You get the baked potato exactly as served. No recipe. No ingredients. Just gravy and regret.
So where do I land?
Useful? Yes.
Fast? Yes.
Cheap? Yes.
Capable of generating snippets that might prod your imagination? Definitely.
A replacement for a composer with taste, scars, life experience, musical judgment, and an actual emotional interior?
Not today.
Probably not tomorrow either.
What worries me isn’t that this thing is “better than humans.” It isn’t. What worries me is that it’s already good enough for people with tin ears, thin standards, or no budget to say, “Eh, close enough.”
And that’s how craft dies. Not with a bang. With a shrug.
Curious where the rest of you land.
Where is AI soundtrack generation genuinely useful right now?
Where does it still completely fall apart?
And what is the thing a human composer is still doing that this machine simply cannot touch?
Prompts and audio examples below. I gave the machine four swings at the ball. It made contact. It did not, in my opinion, write music.