Spotify Users Enjoyed Music by an AI-Generated Band Without Knowing It

Introduction
Generative AI has transformed digital art, and music is the latest frontier. In June 2025, Spotify listeners encountered a new group, The Velvet Sundown. The band amassed over 300,000 monthly listeners within two weeks, surprising even industry experts.
The Rise of The Velvet Sundown
This AI-generated project dropped two full-length albums, Floating On Echoes and Dust and Silence, on June 10th and June 20th respectively. Each album comprises 10–12 tracks, streamed at 44.1 kHz and also distributed as 24-bit FLAC. The underlying models reportedly use a hybrid transformer architecture with 256 attention heads, trained on a 50,000-track rock dataset. Vocals are synthesized using WaveNet-based vocoders, yielding a classic rock timbre topped with auto-tune effects.
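The WaveNet architecture itself is public, so its core mechanism is easy to illustrate. Below is a minimal PyTorch sketch of WaveNet's building block, a dilated causal convolution with a gated activation; module and parameter names are illustrative and not taken from any code connected to The Velvet Sundown:

```python
import torch
import torch.nn as nn

class DilatedCausalBlock(nn.Module):
    """One WaveNet-style residual block: dilated causal conv + gated activation."""

    def __init__(self, channels: int, dilation: int):
        super().__init__()
        # Left-pad so the convolution never sees future samples (causality).
        self.pad = (2 - 1) * dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)
        self.residual = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        padded = nn.functional.pad(x, (self.pad, 0))
        # Gated activation unit: tanh(filter) * sigmoid(gate), as in WaveNet.
        out = torch.tanh(self.filter_conv(padded)) * torch.sigmoid(self.gate_conv(padded))
        return x + self.residual(out)  # residual connection

# Stacking blocks with doubling dilations (1, 2, 4, ...) grows the receptive
# field exponentially, which is how WaveNet captures long-range audio structure.
vocoder = nn.Sequential(*[DilatedCausalBlock(64, 2 ** i) for i in range(8)])
waveform = torch.randn(1, 64, 16000)  # (batch, channels, samples)
print(vocoder(waveform).shape)        # torch.Size([1, 64, 16000])
```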
Unmasking the Illusion
Reddit and X threads flagged discrepancies: nonexistent band members, identical waveform patterns, and suspicious playlist placements. By June 27th, scrutiny of the band's AI-generated Instagram account had revealed telltale image artifacts: symmetrical features, blurred backgrounds, and inconsistent object counts. Photos purportedly showing the band celebrating with burgers featured floating utensils and odd lighting, classic signs of GAN output.
Technical Clues
- Repetitive ambient noise loops indicating sample batching.
- Uniform loudness normalization across tracks.
- Spectrogram analysis revealing spectral gaps between 5–8 kHz (a rough band-energy check is sketched below).
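The spectral-gap clue in particular lends itself to a quick check. The following sketch (assuming NumPy and SciPy are available; the band limits are taken from the clue above and any decision threshold would be illustrative) measures how much of a track's energy falls between 5 and 8 kHz:

```python
import numpy as np
from scipy.signal import spectrogram

def band_energy_ratio(audio: np.ndarray, sr: int,
                      lo: float = 5000.0, hi: float = 8000.0) -> float:
    """Fraction of total spectral energy in the [lo, hi] Hz band.

    Unusually low values between 5-8 kHz can hint at the spectral gaps
    described above, though many legitimate mixes are also dark up there.
    """
    freqs, _, power = spectrogram(audio, fs=sr, nperseg=2048)
    band = (freqs >= lo) & (freqs <= hi)
    return float(power[band].sum() / (power.sum() + 1e-12))

# Example with synthetic audio: white noise has energy everywhere, so its
# 5-8 kHz ratio roughly matches that band's share of the total bandwidth.
sr = 44100
noise = np.random.default_rng(0).standard_normal(sr * 5)
print(f"5-8 kHz energy ratio: {band_energy_ratio(noise, sr):.3f}")
```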
Technical Anatomy of AI-Generated Tracks
Under the hood, The Velvet Sundown’s tracks originate from music-generation models akin to Google MusicLM and OpenAI Jukebox. These models leverage self-supervised learning on unlabeled audio, using 1D convolutional layers followed by transformers to capture temporal dynamics. The final mix employs automated mastering pipelines—compression at a 3:1 ratio, limiting peaks to -1 dBFS, and stereo widening via mid/side EQ.
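The mastering numbers quoted above can be made concrete. Here is a toy NumPy sketch of a static 3:1 compressor followed by makeup gain and a -1 dBFS peak ceiling; real mastering chains add attack/release smoothing and true-peak limiting, so treat this as an illustration of the gain math only:

```python
import numpy as np

def db_to_lin(db: float) -> float:
    """Convert decibels to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def master(audio: np.ndarray, threshold_db: float = -18.0, ratio: float = 3.0,
           makeup_db: float = 14.0, ceiling_db: float = -1.0) -> np.ndarray:
    """Toy chain: static 3:1 compression, makeup gain, then a -1 dBFS ceiling."""
    thresh = db_to_lin(threshold_db)
    mag = np.abs(audio)
    out = audio.copy()
    # Above the threshold, output level grows at 1/ratio of the input level.
    over = mag > thresh
    out[over] = np.sign(audio[over]) * thresh * (mag[over] / thresh) ** (1.0 / ratio)
    out *= db_to_lin(makeup_db)  # makeup gain restores loudness lost to compression
    # Peak "limiting" by normalization: scale so the loudest sample hits the ceiling.
    peak = np.abs(out).max()
    ceiling = db_to_lin(ceiling_db)
    if peak > ceiling:
        out *= ceiling / peak
    return out

tone = 0.9 * np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100, endpoint=False))
out = master(tone)
print(f"output peak: {20 * np.log10(np.abs(out).max()):.2f} dBFS")  # -1.00
```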
Model Specifications
- Transformer depth: 24 layers
- Embedding dimension: 1,024
- Training dataset: 100 million+ minutes of rock and pop tracks
- Inference hardware: TPU v4 or NVIDIA A100 GPUs
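Translated into code, specs like these usually live in a configuration object. A hypothetical rendering follows; the field names are invented for illustration and do not come from any published model card:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MusicModelConfig:
    """Hypothetical config mirroring the reported specs; names are invented."""
    num_layers: int = 24                 # transformer depth
    embed_dim: int = 1024                # embedding dimension
    num_heads: int = 256                 # attention heads, as reported earlier
    train_minutes: int = 100_000_000     # 100 million+ minutes of rock and pop
    hardware: str = "TPU v4 or NVIDIA A100"

cfg = MusicModelConfig()
# With 256 heads, each head is only 1024 / 256 = 4 dimensions wide, which is
# unusually narrow; most published transformers use 8-32 heads at this width.
assert cfg.embed_dim % cfg.num_heads == 0
print(cfg)
```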
Industry Response and Regulatory Developments
Spotify, unlike Deezer, does not mandate AI disclosure. However, the European Union's AI Act, whose transparency obligations begin phasing in from August 2025, will require AI-generated content to be labeled. Spotify has begun internal tests of an AI-content-detection API, aiming to integrate watermark scanning by Q4 2025.
“As generative AI scales, transparency tools will be essential,” warns Dr. Emily Bender, Professor of Computational Linguistics at the University of Washington. “Users deserve to know when art is human-made.”
Detection and Watermarking Strategies
Researchers from Stanford's AI Lab propose embedding inaudible digital watermarks into audio sampled at 96 kHz. These signals can survive downsampling and compression, allowing platforms to trace a track's AI origin. Open-source projects like the Invisible Music Watermark (IMW) use spread-spectrum techniques to encode model metadata into phase-vocoder parameters.
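Spread-spectrum watermarking is a standard technique, though the sketch below is far simpler than IMW's phase-vocoder embedding: each payload bit is spread across a block of samples by adding a keyed pseudorandom carrier, and detection correlates against the same carrier. The carrier amplitude is exaggerated here so the demo decodes reliably; production systems use perceptual masking to keep the mark inaudible:

```python
import numpy as np

def embed(audio: np.ndarray, bits: list[int], key: int,
          alpha: float = 0.02) -> np.ndarray:
    """Spread each bit over a block of samples via a keyed pseudorandom carrier."""
    rng = np.random.default_rng(key)
    chip = len(audio) // len(bits)            # samples per payload bit
    out = audio.copy()
    for i, b in enumerate(bits):
        carrier = rng.standard_normal(chip)   # key-derived carrier, unique per bit
        out[i * chip:(i + 1) * chip] += alpha * (1 if b else -1) * carrier
    return out

def extract(audio: np.ndarray, n_bits: int, key: int) -> list[int]:
    """Correlate each block with the regenerated carrier; the sign gives the bit."""
    rng = np.random.default_rng(key)
    chip = len(audio) // n_bits
    bits = []
    for i in range(n_bits):
        carrier = rng.standard_normal(chip)
        corr = float(audio[i * chip:(i + 1) * chip] @ carrier)
        bits.append(1 if corr > 0 else 0)
    return bits

sr = 44100
host = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr)  # 2 s host tone
payload = [1, 0, 1, 1, 0, 0, 1, 0]                             # e.g. a model-ID byte
marked = embed(host, payload, key=1234)
print(extract(marked, len(payload), key=1234))                 # [1, 0, 1, 1, 0, 0, 1, 0]
```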
Ethical and Legal Implications
AI-generated bands raise questions about copyright and royalties. Current law in both the US and the EU generally withholds copyright from works lacking human authorship, effectively placing fully machine-generated tracks in the public domain. Artists argue this disincentivizes creativity, while labels see potential in cost-efficient content generation.
Key concerns include:
- Ownership disputes for training data.
- Potential flooding of streaming platforms with low-quality content.
- Impact on royalties and revenue distribution.
Looking Ahead
The Velvet Sundown case underscores a broader shift: from AI-assisted production to fully synthetic acts. As models improve, distinguishing human output from machine output will only get harder. Platforms, regulators, and the music industry must collaborate on standards for labeling, detection, and equitable revenue sharing.