- Google DeepMind has improved and expanded access to its Music AI Sandbox
- The Sandbox now includes the Lyria 2 model and Lyria RealTime features to generate, extend, and edit music
- AI-generated music is watermarked with SynthID
Google DeepMind has brought some new and improved sounds to its Music AI Sandbox, which, although sand is notoriously bad for musical instruments, is where Google houses experimental tools for making tracks with the help of AI models. The Sandbox now offers the new Lyria 2 AI model and Lyria RealTime's AI music production tools.
Google launched the Music AI Sandbox as a way to spark ideas, generate soundscapes, and perhaps finally help finish that half-written track you've been avoiding all year. The Sandbox is aimed mainly at professional musical artists and producers, and access has been fairly restricted since its debut in 2023. But Google is now opening the platform to many more people in music production, including those looking to create soundtracks for films and games.
The new Lyria 2 AI model is the rhythm section underlying the new Sandbox. The model is trained to produce high-fidelity audio output, with detailed, intricate compositions in any genre, from shoegaze to synth-pop to whatever strange banjo-core hybrid you're cooking up in your bedroom studio.
Lyria RealTime puts AI music creation in a virtual studio you can jam with. You can sit at your keyboard, and Lyria RealTime will help you blend ambient house beats with classic funk, performing and adjusting the sound on the fly.
Virtual music studio
The Sandbox offers three main tools for producing tunes. Create, seen above, lets you describe the kind of sound you're aiming for in words; the AI then whips up music samples you can use as jumping-off points. If you already have a rough idea but can't figure out what comes after the second chorus, you can upload what you have and let the Extend feature come up with ways to continue the piece in the same style.
The third feature is called Edit, which, as the name implies, remakes music in a new style. You can ask for your tune to be reimagined in a different mood or genre, either by playing with the digital control board or through text prompts. For example, you could ask for something as basic as "turn this into a ballad," something more complex like "make this sadder but still danceable," or see how weird you can get by asking the AI to "rework this EDM drop as if it's all an oboe section." You can listen to an example below by Isabella Kensington.

AI singalong
Everything generated by Lyria 2 and Lyria RealTime is watermarked with Google's SynthID technology. That means AI-generated tracks can be identified even if someone tries to pass one off as a lost Frank Ocean demo. It's a smart move in an industry already bracing for heated debates over what counts as "real" music and what doesn't.
These philosophical questions will also decide the fate of a lot of money, so there's more than an abstract debate about how to define creativity at stake. But, as with AI tools for producing text, images, and video, this isn't the death of traditional songwriting. Nor is it a magic source of the next hit. AI can make a mediocre tune fall flat if it's used poorly. Happily, plenty of musical talents understand what AI can and can't do, as evidenced by Tommy below.
