How to create music with just text and whistles with artificial intelligence

Google announced a tool with which audio can be created through descriptions of situations, paintings, moments and places.

Google unveiled MusicLM, its new artificial intelligence (AI) tool for generating audio files from text descriptions, and its announcement showcased many of the ways it can be used.

For its operation, the system is trained on a corpus of 280,000 hours of music; from there, it generates the requested content based on the description of a phrase, a situation, and many other options. “MusicLM is a model that generates high-fidelity music from text descriptions such as ‘a relaxing violin melody backed by a distorted guitar riff,’” its developers explain on the official AI page.
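MusicLM itself is not public, so the following is only a toy sketch of the *shape* of text-conditioned generation: a text prompt deterministically seeds a tiny melody generator that writes a WAV file. Every name here (`prompt_to_melody`, `write_wav`) is hypothetical and has nothing to do with Google's actual model.

```python
# Toy illustration of text-conditioned audio generation (NOT MusicLM):
# the text prompt seeds a random note picker, so the same description
# always "generates" the same melody.
import hashlib
import math
import random
import struct
import wave

def prompt_to_melody(prompt: str, n_notes: int = 8) -> list[float]:
    """Derive a repeatable note sequence (in Hz) from a text description."""
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:8], "big")
    rng = random.Random(seed)  # same prompt -> same seed -> same melody
    scale = [261.63, 293.66, 329.63, 392.00, 440.00]  # C major pentatonic
    return [rng.choice(scale) for _ in range(n_notes)]

def write_wav(path: str, freqs: list[float], rate: int = 16000) -> None:
    """Render each note as 0.25 s of sine wave into a mono 16-bit WAV."""
    frames = bytearray()
    for f in freqs:
        for i in range(rate // 4):
            sample = int(12000 * math.sin(2 * math.pi * f * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

melody = prompt_to_melody("a relaxing violin melody backed by a distorted guitar riff")
write_wav("melody.wav", melody)
```

A real model replaces the hash-and-pick step with a neural network trained on those 280,000 hours of audio, but the interface idea is the same: text in, waveform out.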

But in addition to creating full compositions from text, it can also create melodic pieces from whistling and humming in whatever rhythm is called for.


The potential of this artificial intelligence

The announcement of this new tool is accompanied by several examples of how to use it, suggesting that, given the wide variety of options, the only limit is what someone can come up with.

The first examples start from ideas directly related to music, such as creating an audio piece that mixes genres like reggaeton and electronic music and includes “a spatial and otherworldly sound.”

But there are also uses such as creating the soundtrack for an arcade video game, giving it precise instructions like a “fast, upbeat beat, with a catchy electric guitar riff.”

The examples extend to more specific situations, such as creating pieces for going for a run, meditating, waking up, or giving 100% in a given situation, which broadens the range of creativity to combine with other ideas.


One of the most striking points is the possibility of describing works of art. In the announcement, they give several examples of how a painting can be turned into music based on a description of it.

One of them is The Persistence of Memory by Salvador Dalí, which they describe as follows: “Its images of melting clocks mock the rigidity of chronometric time. The clocks themselves look like soft cheese; in fact, according to Dalí himself, they were inspired by hallucinations after eating Camembert cheese. In the center of the painting, under one of the clocks, there is a distorted human face in profile. The ants on the plate represent decomposition.”

And the result is a calm piece of music with soft melodies, as if you were on the beach in the middle of a trip.

An example they repeat with iconic works such as Napoleon Crossing the Alps by Jacques-Louis David, Dance by Henri Matisse, and The Scream by Edvard Munch.

But the possibilities are endless, because the model can also generate audio representing what a place would sound like, the chords of a musical genre, different decades, or a musician's different levels of experience.

Finally, they gave several examples of how the same request can produce different pieces, thanks to the large number of hours of music on which this artificial intelligence was trained.
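That behavior is typical of generative models: they sample from a learned distribution, so one prompt can yield many distinct outputs. A toy sketch of the idea, assuming a hypothetical `sample_melody` helper in which the prompt fixes the "conditioning" and a separate sampling seed provides the variation:

```python
# Hypothetical sketch, not MusicLM's real interface: the prompt conditions
# the generator, while a sampling seed varies the result on each request.
import random

SCALE = [261.63, 293.66, 329.63, 392.00, 440.00]  # C major pentatonic (Hz)

def sample_melody(prompt: str, sampling_seed: int, n_notes: int = 8) -> list[float]:
    """Toy sampler: same prompt + same seed reproduces; a new seed varies."""
    rng = random.Random(sum(map(ord, prompt)) * 1000 + sampling_seed)
    return [rng.choice(SCALE) for _ in range(n_notes)]

prompt = "a fusion of reggaeton and electronic dance music"
variants = [sample_melody(prompt, seed) for seed in range(4)]
```

With a fixed seed the output is reproducible; changing only the seed gives a different piece for the same description, which is what Google's examples demonstrate.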

For now this tool is not available to the public and, according to TechCrunch, Google does not plan to launch it in the short term due to the challenges it poses around copyright.

Much of this caution comes from the researchers' finding that at least 1% of the generated pieces were drawn from existing music.

“We found that only a small fraction of the examples were memorized exactly, while in 1% of cases we identified an approximate match. We emphasize the need for future work to address the risks associated with music generation,” they stated.