
- OpenAI is reportedly developing an AI music producer that creates songs from prompts.
- Juilliard students are said to be helping annotate musical scores for the project.
- The project could spark rivalry with companies such as Suno and Udio, as well as legal battles with music labels and artists.
OpenAI, the company behind AI tools including the Sora 2 video generator, is reportedly developing a new AI tool that will create music from text prompts and audio.
According to a report from The Information, OpenAI is working with music students at the prestigious Juilliard School to annotate the scores used to help build and train the model, although the school itself has stated that it is not involved in the project.
If the unnamed, unconfirmed OpenAI project comes to fruition, it would let users turn a few words or a short snippet of sound into new instrumental accompaniments, such as a guitar track to pair with a vocal recording, or produce background music tailored to a specific mood, tempo, or visual.
OpenAI has experimented with music AI models before. The company created MuseNet in 2019, which could produce music that matched different styles, but was limited to small MIDI files. Jukebox, which appeared in 2020, produced full vocal tracks to accompany the music it wrote, but it was quite primitive compared to more recent efforts from Suno and other AI music developers.
What OpenAI seems to be working on now would go far beyond those early forays, looking more like a musical counterpart to the new Sora 2 model and the Sora app for AI-generated video.
The supposed inclusion of Juilliard students in score annotation is an interesting touch. Large language models are routinely trained on massive sets of unstructured data, but musical structure is notoriously difficult to teach that way.
Unlike text, where billions of examples can simply be scraped, music requires an understanding of harmony, rhythm, instrumentation, and timing: not just what sounds good, but why. Annotations from trained musicians could teach the AI to "read" music far more effectively.
AI Battle of the Bands
The OpenAI music project would seemingly put the company in direct competition with Suno, Udio, Google's Music Sandbox, and other AI music tools. Platforms like Suno have drawn a lot of interest recently and have grown more sophisticated, but that improvement has come with a lot of disorder.
Streaming platforms are already flooded with AI-generated content, only some of which is properly labeled. Sometimes those AI tracks are passed off as the work of real people.
Universal Music Group and Warner Music Group have already filed copyright infringement lawsuits against Suno and Udio. OpenAI's entry into this space only raises the stakes, especially since the company carries its own legal baggage in the form of multiple ongoing disputes over the use of copyrighted content in model training. If it turns out that this new music model was trained in part on commercial recordings, that could be another powder keg waiting to explode.
Still, the AI-powered music economy is growing faster than regulators and copyright owners can track. Users of these tools are rushing toward a future in which half of online music could be AI-generated, while no one agrees on who owns what.
And that's why OpenAI's move is important. It's a bet that music, like text and images, can be made flexible and programmable, and that users will want and expect to make music the way they apply Instagram filters or TikTok captions. That doesn't necessarily mean the end of human-made music, but it does mean we will have to decide how valuable human-made music is to us.