
Google’s Gemini adds AI music generation with Lyria 🤖🎶

Gemini can already write, analyze images, and generate visuals. Now, it can also compose music.

With Lyria, the new music model from Google DeepMind, built directly into the Google Gemini app, Google is pushing its AI assistant into a new frontier: on-demand, AI-generated soundtracks.

An AI that turns ideas into music 🎵

With Lyria 3, Gemini takes another step forward. Describe a concept, a mood, or even upload a photo, and the AI will generate a 30-second track complete with instrumental, lyrics, and vocals.

This isn’t about crafting the next chart-topping masterpiece. It’s about creating short, fun, personalized soundtracks for everyday moments, social posts, or creative projects.

Type something like: “an upbeat afrobeat track to celebrate a promotion with colleagues, with lyrics about success and solidarity” — and seconds later, you’ll have a ready-to-share song. Gemini even generates cover art for your track using Nano Banana, packaging it like a proper single.

The pitch is clear: frictionless music creation, no DAW (digital audio workstation) required.

How Lyria works inside Gemini 🎛️

Google positions Lyria 3 as a high-fidelity music generator that turns simple text or images into coherent audio tracks, and highlights three key upgrades:

  • No need to write your own lyrics
  • Finer control over style, voice, and tempo
  • More realistic and musically complex compositions

Inside the Gemini app, you get a few creative paths:

Text to music:
Describe a genre (R&B, afrobeat, rock, ’80s synth-pop), a vibe, or even a personal memory. Lyria generates a track, with or without lyrics.

Image or video to music:
Upload vacation photos or a clip of your dog, and Gemini composes a soundtrack whose lyrics and mood align with the visuals.

Tracks are capped at 30 seconds, but they’re optimized for instant sharing or download — especially for vertical formats like Reels, Shorts, and Stories.

A new playground for creators 🎬

For content creators, Lyria is more than a novelty feature. It plugs directly into Google’s broader ecosystem, especially on YouTube.

The model already powers Dream Track, an experiment that lets select creators generate unique soundtracks for Shorts — including partner artist voices in tightly controlled settings.

With Lyria 3, Google promises cleaner tracks, broader stylistic range, lyrics that stay faithful to your prompt, and more granular creative control. For videographers, journalists, and indie creators, that could mean less time digging through “almost right” royalty-free libraries — and more time building a distinctive sonic identity for each piece of content.

Assisted creativity — and ethical gray zones ⚖️

Google is keen to stress that Lyria is designed for original expression, not imitation. If you reference a major artist in your prompt, the model draws on general stylistic cues — not the artist’s exact voice or signature quirks. Output is also filtered against existing works to reduce overly close matches.

Every track generated with Lyria is discreetly tagged with SynthID, Google’s watermarking system for AI content. Within Gemini, you can even upload an audio file and ask whether it was generated using Google’s models, with the assistant combining watermark detection and its own reasoning capabilities.

Still, the broader question looms: how far should AI models go in music creation without destabilizing artists, producers, and the wider music economy?

A new era for everyday music 🎧

By bringing Lyria into Gemini, Google is making a bold bet: AI as a co-creator for everyone — not just for images and text, but for sound.

We’re moving from AI that illustrates your ideas to AI that scores them, instantly.

Now it’s up to creators, musicians, labels, and platforms to decide how to embrace — or regulate — this new tool. One thing is certain: after text and visuals, the generative AI race is now playing out in your headphones.

So, how far would you go with Lyria? 💬
Would you use it to score your videos, podcasts, or even rough musical demos? Or do you prefer to keep your projects 100% human-made? Share your thoughts, concerns, and experiments in the comments — the debate is just getting started.

Source: Frandroid

