Tool in development to create soundtracks from prompts, building on Jukebox legacy amid rising AI audio competition.
OpenAI is advancing into generative music with a new tool that crafts original tracks from text descriptions or audio clips, according to a report from The Information cited by Engadget on October 26, 2025.
The project, still in early stages, envisions applications like adding guitar riffs to vocals or custom scores to videos, potentially integrating with ChatGPT or Sora.
To refine its model, OpenAI has partnered with students from The Juilliard School, who are annotating musical scores for high-quality training data. This collaboration addresses key challenges like capturing musical nuance and avoiding copyright issues, as noted in WebProNews.
It builds on OpenAI’s 2020 Jukebox project, which generated raw audio in genres like blues, but shifts toward multimodal prompts for more user-friendly creation.
The tool enters a crowded field dominated by startups like Suno and Udio, which have flooded streaming platforms with AI tracks, drawing scrutiny over "slop" content, as seen in the Velvet Sundown parody scandal.
Competitors include Google's MusicFX and Stability AI's Stable Audio, but OpenAI's resources could elevate the space. Features may include multi-vocal generation and AI mixing, appealing to indie creators, per NDTV.
No launch timeline is set, but experts predict integration with Sora for video-audio synergy, as speculated in Mint. Creators can experiment with existing tools like Suno or explore ElevenLabs for voice-music hybrids.
As AI music proliferates (projected to hit $1.5 billion by 2028, per MarketsandMarkets), OpenAI's entry raises ethical questions about originality and artist rights.
For musicians, this could democratize production; for listeners, it promises personalized soundscapes.