AI Patch Creation Tool

Hey guys, check this out…

And a YouTube vid of how to use it

Even if it only gets close, it’ll be a great place to start tweaking from

AA xx


finally!


I haven’t tried it yet but this is kind of a big deal.

The most telling sentence in that video was the comparison to Synplant 2, claiming that Synplant has a larger “parameter space”, which seems untrue; Vital’s parameter space is absolutely enormous. So I do sort of wonder just how much of Vital is exposed to this program under the hood. Does it not access the wavetable editor? Spectral or waveform warp modes? Audio-rate modulation? Basically every patch we saw this produce was a simple subtractive one, at which it seems reasonably competent, but that’s also sort of the shallow end of the sound design pool.

It’s still a cool idea for a tool, but I’d be interested in exactly what it’s manipulating, and in seeing it tried out on a much larger sound palette, especially FM-based or warp/fold-based stuff.


Any day now we’ll have AI that can make an audio-to-DX7 patch converter, and we’ll get some patches that nobody’s been able to make yet. (edit: let me just ask ChatGPT how to go about doing that)


"
AI

Creating a deep learning tool that analyzes audio data and converts it into a Yamaha DX7 patch requires several steps. First, you’ll need to gather a large dataset of audio samples and corresponding MIDI files. Then, you can use a convolutional neural network to extract features from the audio data. Afterward, you’ll need to train another neural network to map the extracted features onto the parameters of a Yamaha DX7 patch. Finally, you can use the trained model to convert incoming audio data into a Yamaha DX7 patch in real time. It’s a complex process, but it’s definitely possible with modern machine learning techniques.

Converting a complex audio waveform into a 6-operator FM synthesis algorithm involves several steps. First, you’ll need to preprocess the audio signal to remove any unwanted noise or artifacts. Then, you can use a Fourier transform to analyze the audio signal and extract its frequency components. Next, you can use these frequency components to modulate a carrier signal generated by a phase accumulator. Finally, you can use six operators (usually sine waves) to generate the desired output sound. This process is known as Frequency Modulation Synthesis, and it’s commonly used in electronic music production.
"


As interesting an idea as it is to use the overtones of a signal as modulators in an FM patch (I think Toybox’s Atomic Oscillator for Reaktor Blocks does this), that is 100% not how you would reverse-engineer an FM patch at all.

The approach (from someone who understands how a neural network works) would be to take a large selection of Vital presets and record some random sequences of MIDI notes performing them. You’d match up each audio snippet with its preset (as a list of parameter values) in a series of pairs. Then you’d feed the FFT and amplitude curves of the audio snippet (there are a few other “observables” that might be useful too) into a neural network that outputs Vital parameter values, and use gradient descent to drive those outputs towards the known values of the preset.

Basically exactly what the tool in the OP does; the trick is just WHICH parameters you expose as outputs for the network to train on. In code, the training loop might look roughly like the sketch below.
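To be clear, this is a minimal PyTorch illustration of the idea, not how the actual tool works: the feature sizes, parameter count, and dataset here are all invented, and a real version would render the (spectrogram, parameter) pairs from actual presets.

```python
# Minimal sketch: learn a mapping from audio features to synth parameter values.
# All sizes and the dataset are illustrative, not taken from the actual tool.
import torch
import torch.nn as nn

N_FFT_BINS = 512   # magnitude-spectrum bins per frame (assumed)
N_FRAMES = 64      # frames per audio snippet (assumed)
N_PARAMS = 200     # number of exposed synth parameters, normalized to 0..1 (assumed)

class PatchEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                              # (batch, frames, bins) -> flat vector
            nn.Linear(N_FRAMES * N_FFT_BINS, 1024),
            nn.ReLU(),
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Linear(512, N_PARAMS),
            nn.Sigmoid(),                              # keep outputs in [0, 1]
        )

    def forward(self, spectrogram):
        return self.net(spectrogram)

model = PatchEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in dataset: (spectrogram, known parameter values) pairs. In reality these
# would be rendered from real presets; random tensors here just let the loop run.
dataset = [(torch.rand(8, N_FRAMES, N_FFT_BINS), torch.rand(8, N_PARAMS))
           for _ in range(10)]

for epoch in range(5):
    for spec, target_params in dataset:
        predicted = model(spec)
        loss = loss_fn(predicted, target_params)       # gradient descent toward the known preset
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The sigmoid on the output keeps everything in a normalized 0–1 range, which matches how most soft-synth parameters are stored, and it’s exactly where the “which parameters do you expose” decision shows up: N_PARAMS is the whole design.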

I thought it was a little bit amusing to ask my locally installed LLM how to do it, and it tried to answer (nous-hermes-13b). But for sure, humans could probably do a much better job of it. There must be a pure-math way to reverse some program audio material into something that vaguely resembles it with enough FM oscillators, using ADSR-type envelopes. Even if it only gets close it would be fun.
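For what it’s worth, you can at least gesture at that pure-math route with differentiable DSP: render a simple FM pair in PyTorch and gradient-descend its ratio and modulation index against the target’s magnitude spectrum. This is a toy sketch with everything (sample rate, the two parameters, even the target itself) made up for illustration; the spectral loss surface for FM ratios is notoriously bumpy, so this can easily land in a local minimum, which is part of why you’d reach for the neural-net approach above instead.

```python
# Toy sketch: fit one FM operator pair to a target spectrum by gradient descent.
# Purely illustrative; the target, rates, and parameters are all invented here.
import torch

SR = 16000
N = SR // 4                                    # quarter-second snippet
t = torch.arange(N) / SR                       # time axis in seconds

def fm(freq, ratio, index):
    # Two-operator FM: a sine modulator at freq*ratio phase-modulates the carrier.
    mod = torch.sin(2 * torch.pi * freq * ratio * t)
    return torch.sin(2 * torch.pi * freq * t + index * mod)

# Hypothetical target: 200 Hz carrier, ratio 2.0, index 3.0
target = fm(torch.tensor(200.0), torch.tensor(2.0), torch.tensor(3.0))
target_mag = torch.stft(target, n_fft=1024, return_complex=True).abs()

# Trainable parameters, deliberately started in the wrong place
ratio = torch.tensor(1.5, requires_grad=True)
index = torch.tensor(1.0, requires_grad=True)
opt = torch.optim.Adam([ratio, index], lr=0.01)

for step in range(500):
    pred = fm(torch.tensor(200.0), ratio, index)
    pred_mag = torch.stft(pred, n_fft=1024, return_complex=True).abs()
    loss = (pred_mag - target_mag).pow(2).mean()   # magnitude-spectrum loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"ratio {ratio.item():.2f} (true 2.0), index {index.item():.2f} (true 3.0)")
```

A real DX7-style fit would need six operators with an ADSR envelope per operator and a smarter loss (multi-resolution STFT, say), but the optimization idea is the same.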