multi-source-diffusion-models

View the Project on GitHub gladia-research-group/multi-source-diffusion-models

Multi Source Diffusion Models

Generation

Here we ask the neural model to generate new music from scratch, containing only piano and drums:

Sample #1 Sample #2
Sample #3 Sample #4
Sample #5 Sample #6
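The samples above are produced by sampling all source tracks jointly from the diffusion model. The actual implementation lives in the repository linked above; as a toy illustration of the idea only, the sketch below runs annealed Langevin sampling over a stack of sources, with `toy_score` (the analytic score of a unit Gaussian, a hypothetical stand-in for the trained score network) and an assumed geometric noise schedule:

```python
import numpy as np

def toy_score(x, sigma):
    # Score grad log p_sigma(x) for data ~ N(0, I) perturbed with noise of
    # std sigma. A stand-in for a trained score network (assumption).
    return -x / (1.0 + sigma ** 2)

def sample_joint(num_sources=2, length=1024, steps=50, seed=0):
    """Annealed Langevin sampling of all source tracks jointly (toy sketch)."""
    rng = np.random.default_rng(seed)
    sigmas = np.geomspace(10.0, 0.01, steps)   # decreasing noise schedule
    x = sigmas[0] * rng.standard_normal((num_sources, length))
    for sigma in sigmas:
        eps = 0.5 * sigma ** 2                 # step size tied to noise level
        for _ in range(3):                     # a few Langevin steps per level
            z = rng.standard_normal(x.shape)
            x = x + eps * toy_score(x, sigma) + np.sqrt(2.0 * eps) * z
    return x
```

Because the sources are modeled jointly rather than independently, one sampler handles generation, partial generation, and separation by changing only how the sampling is constrained.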



Source Imputation (a.k.a. partial generation)

Given a drum track as input, the neural model generates the accompanying piano from scratch:

Input Drums Track 1

Sampled Piano #1 Sampled Piano #2



Input Drums Track 2

Sampled Piano #1 Sampled Piano #2



Similarly, given a piano track as input, the neural model generates the accompanying drums:

Input Piano Track 1

Sampled Drums #1 Sampled Drums #2



Input Piano Track 2

Sampled Drums #1 Sampled Drums #2
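Source imputation keeps the given track fixed and samples only the missing one. The sketch below is not the repository's code; it is a minimal inpainting-style illustration under the same toy setup as before (`toy_score` is a hypothetical stand-in for the trained score network): at every noise level the known source is clamped to a noised copy of itself, so only the other source is actually generated.

```python
import numpy as np

def toy_score(x, sigma):
    # Score of N(0, I) data at noise level sigma (stand-in for a network).
    return -x / (1.0 + sigma ** 2)

def impute(known, steps=50, seed=0):
    """Generate the missing source while keeping `known` fixed (toy sketch)."""
    rng = np.random.default_rng(seed)
    length = known.shape[0]
    sigmas = np.geomspace(10.0, 0.01, steps)
    x = sigmas[0] * rng.standard_normal((2, length))
    for sigma in sigmas:
        # Clamp the known source to a noised copy of itself (inpainting-style).
        x[0] = known + sigma * rng.standard_normal(length)
        eps = 0.5 * sigma ** 2
        z = rng.standard_normal(x.shape)
        x = x + eps * toy_score(x, sigma) + np.sqrt(2.0 * eps) * z
    x[0] = known  # return the exact conditioning track
    return x
```

Running the sampler twice with different seeds yields different accompaniments for the same input, which is why each input track above has two distinct sampled outputs.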



Source Separation

Finally, our model can also be used to separate the individual sources of an input mixture:

Input Mixture 1

Separated Bass Separated Drums
Separated Guitar Separated Piano



Input Mixture 2

Separated Bass Separated Drums
Separated Guitar Separated Piano
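Separation can be framed as constrained sampling: the sources must add up to the observed mixture. As a toy illustration only (again with `toy_score` as a hypothetical stand-in for the trained score network, and a simple projection step rather than the method used in the repository), the sketch below projects the sample back onto the constraint set after every Langevin update:

```python
import numpy as np

def toy_score(x, sigma):
    # Score of N(0, I) data at noise level sigma (stand-in for a network).
    return -x / (1.0 + sigma ** 2)

def separate(mixture, num_sources=2, steps=50, seed=0):
    """Sample sources constrained to sum to `mixture` (toy sketch)."""
    rng = np.random.default_rng(seed)
    sigmas = np.geomspace(10.0, 0.01, steps)
    x = sigmas[0] * rng.standard_normal((num_sources, mixture.shape[0]))
    for sigma in sigmas:
        eps = 0.5 * sigma ** 2
        z = rng.standard_normal(x.shape)
        x = x + eps * toy_score(x, sigma) + np.sqrt(2.0 * eps) * z
        # Project onto the constraint set {x : sum_i x_i = mixture}.
        x = x + (mixture - x.sum(axis=0)) / num_sources
    return x
```

The final projection guarantees that the separated stems reconstruct the input mixture exactly, while the score term pushes each stem toward the learned source distribution.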