Online Supplement
We introduce a new class of generative models for music called live music models that produce a continuous stream of music in real time with synchronized user control. We release Magenta RealTime, an open-weights live music model that can be steered using text or audio prompts to control acoustic style. On automatic metrics of music quality, Magenta RealTime outperforms other open-weights music generation models, despite using fewer parameters and offering first-of-its-kind live generation capabilities. We also release Lyria RealTime, an API-based model with extended controls, offering access to our most powerful model with wide prompt coverage. These models demonstrate a new paradigm for AI-assisted music creation that emphasizes human-in-the-loop interaction for live music performance.
These samples demonstrate the model's ability to create smooth transitions between different musical ideas.
These samples are each generated using a single fixed text prompt, listed below (see the sketch after the prompt list):
"Groovy funky song with an intimate touch, perfect for a small self contained dance"
"An upbeat song with a latin or salsa feel"
"A classic pop-rock straight out of the 70s, with a very straightforward 4/4 drum beat, a very interesting instrumentation and a natural development that leads to the chorus. The very-low register bass piano notes are a very subtle, yet quite interesting feature of this song. It is a very cheerful, promising tune that would help me get up and be ready to roll for my day."
"Trance experimental electronic track with a weird sounding vocal sampled lead, hollow drums and a 8-bit sounding beat."
"A rather slow piano track creating a quiet and relaxed ambiance"
"Upbeat fast tempo with a blues rock feel that one can dance"