DeClipper

Give "the take" another chance! DeClipper's custom AI model repairs audio recordings that clipped on input, learning from the rest of the file to reconstruct the missing waveform peaks in post.
Explore the code on GitHub

DeClipper Logo

How it works

DeClipper is still in the development and fine-tuning stage, but its training workflow is simple. The pipeline takes in unclipped audio and pre-processes it by adding gain until clipping and distortion occur, then passes the processed audio to the model in the form of a Mel spectrogram. The model performs down-convolutions to extract relevant feature data, followed by up-convolutions to produce its best guess at the original audio waveform. The loss is then computed against the clean original, back-propagation updates the weights and biases, and the next waveform is fed in.
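The workflow above can be sketched in a few lines of PyTorch. This is a minimal illustration, not DeClipper's actual code: for brevity it operates directly on the raw waveform rather than the Mel spectrogram the project uses, and the layer sizes and names (`clip_input`, `DeClipNet`) are hypothetical.

```python
import torch
import torch.nn as nn

def clip_input(clean, gain_db=12.0, ceiling=1.0):
    """Pre-processing: boost gain until peaks exceed the ceiling, then
    hard-clip, producing a (clipped, clean) training pair."""
    gained = clean * 10 ** (gain_db / 20)
    return torch.clamp(gained, -ceiling, ceiling)

class DeClipNet(nn.Module):
    """Toy encoder-decoder: down-convolutions extract features,
    transposed (up-)convolutions reconstruct the waveform."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 9, stride=2, padding=4,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 9, stride=2, padding=4,
                               output_padding=1),
        )

    def forward(self, x):
        return self.up(self.down(x))

# One training step on a synthetic "clean" waveform
model = DeClipNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.sin(torch.linspace(0, 200, 4096)).reshape(1, 1, -1) * 0.5
clipped = clip_input(clean)                      # flat-topped peaks
loss = nn.functional.mse_loss(model(clipped), clean)
opt.zero_grad()
loss.backward()                                  # back-propagation
opt.step()                                       # weight/bias update
```

In a real run, the clean/clipped pairs would come from a dataset loader and the reconstruction loss would be computed on the spectrogram representation as described above.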

Progress

So far, the basic training loop is established and training has begun on a relevant dataset, but many fine-tuning bugs are still being worked out to make the model's guesses more accurate. My next step is to implement multimodal learning, passing the model both the raw waveform and the Mel spectrogram representation in the hope of increasing its accuracy. I will update the website as progress is made!
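One common way to wire up that kind of multimodal input is a two-branch encoder whose features are fused before decoding. The sketch below assumes a precomputed Mel spectrogram tensor and hypothetical names (`MultimodalDeClipper`, branch sizes); it illustrates the fusion idea, not DeClipper's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalDeClipper(nn.Module):
    """Hypothetical two-branch model: one encoder reads the raw waveform,
    another reads a precomputed Mel spectrogram; their features are
    concatenated along the channel axis before decoding."""
    def __init__(self, n_mels=64):
        super().__init__()
        self.wave_enc = nn.Sequential(
            nn.Conv1d(1, 32, 9, stride=4, padding=4), nn.ReLU())
        self.mel_enc = nn.Sequential(
            nn.Conv1d(n_mels, 32, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv1d(64, 32, 1)   # 1x1 conv mixes the two branches
        self.dec = nn.ConvTranspose1d(32, 1, 9, stride=4, padding=4,
                                      output_padding=3)

    def forward(self, wave, mel):
        w = self.wave_enc(wave)            # (batch, 32, samples / 4)
        m = self.mel_enc(mel)              # (batch, 32, spectrogram frames)
        # align the spectrogram branch's time axis with the waveform branch
        m = nn.functional.interpolate(m, size=w.shape[-1])
        return self.dec(self.fuse(torch.cat([w, m], dim=1)))

# Usage: both representations of the same clipped audio go in together
net = MultimodalDeClipper()
wave = torch.randn(2, 1, 4096)             # raw waveform batch
mel = torch.randn(2, 64, 32)               # matching Mel spectrogram batch
out = net(wave, mel)
```

The interpolation step handles the fact that the spectrogram's frame rate differs from the waveform's sample rate, which is the main plumbing issue when feeding both representations to one network.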

Final Product

The goal of this project is to create a neural network lightweight enough to build into a standard audio plugin, able to run on any system that can also run a traditional DAW. This would allow editors to integrate the tool into their daily workflow and make use of takes that would otherwise have to be re-recorded. For the time being, I intend to train the model on dialogue for film editing, but in the future I would like to include the MUSDB18 dataset to expand its functionality to more musical applications.