"Graintable" synthesis

Though Vital doesn’t present this ability natively to the user, I think it’s really interesting. As has been noted elsewhere, the 2048-sample length of default wavetable formats corresponds to “F-1, detuned -22 cents”. In Vital, this can be thought of as “the F above middle C, played on an oscillator with -48 semitones of coarse detune and -22 cents of fine detune”.

What this means is that, at this frequency (F-1, detuned -22 cents), the oscillator in Vital is playing back the “natural sampled frequency” of its wavetable, at 44100 samples/second.
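As a sanity check on those numbers, here’s the arithmetic (a Python sketch; I’m taking A4 = 440 Hz equal temperament as the tuning reference):

```python
import math

SAMPLE_RATE = 44100
TABLE_SIZE = 2048

# A 2048-sample table read once per cycle at 44.1 kHz sounds at:
natural_hz = SAMPLE_RATE / TABLE_SIZE            # ~21.533 Hz

# F above middle C (F4), dropped 48 semitones (4 octaves):
f4_hz = 440 * 2 ** (-4 / 12)                     # ~349.23 Hz
low_f_hz = f4_hz / 16                            # ~21.827 Hz

cents = 1200 * math.log2(natural_hz / low_f_hz)
print(f"{natural_hz:.3f} Hz, {cents:+.1f} cents from that low F")
```

By this arithmetic it comes out around -23 cents rather than exactly -22; the commonly quoted figure is presumably just rounded slightly differently, and -22 cents is the setting the trick below uses.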

Now, Vital (like most wavetable synths) defaults to this “fidelity” because a 20Hz wave (the lower limit of human hearing) sampled at this rate can encode information up to 20kHz (the upper limit of human hearing, and basically the Nyquist frequency of normal 44.1kHz audio). So it’s a nice, “safe” wave size. And indeed, if you use Vital’s “resynthesize preset to wavetable” feature, it generates a note at this pitch (well, with the synth’s master voice detuned so that it’s this pitch) for 4 seconds (the duration required to cover exactly 256 “frames” at this sample rate).
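Another way to see why 2048 is the “safe” size: a 2048-sample cycle can hold 1024 harmonics, and at this fundamental the topmost harmonic lands exactly on Nyquist (quick Python check):

```python
SAMPLE_RATE = 44100
TABLE_SIZE = 2048

fundamental_hz = SAMPLE_RATE / TABLE_SIZE  # ~21.5 Hz, near the 20 Hz hearing floor
top_harmonic = TABLE_SIZE // 2             # a 2048-sample cycle holds 1024 partials
print(fundamental_hz * top_harmonic)       # 22050.0, exactly Nyquist for 44.1 kHz
```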

So far, this is all public information. However, things get interesting when you drag a sample ONTO Vital and attempt to use it as a wavetable. Specifically, if you import it “as a Wavetable” (rather than using Pitch Splice or Vocode, which resize the FFT bins for pitch-detection purposes).

If you do this, set the oscillator to -48 semitones and -22 cents, play the F above middle C, set LFO 1 to control wavetable position with a linear ramp, give LFO 1 a duration that matches the sample’s original duration, and set the blend mode to “file” or “time”, you will have created, in essence, a sampler.

What you’ve done is treat all of the non-harmonic content of the sample as though it were part of the harmonic partials of some much lower-pitched waveform (this works thanks to the magic of the Nyquist theorem) and divide it into small “grains”. Those grains last about 46 milliseconds each. As you drag the wavetable position, you “slide” through those grains with crossfading.
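The ~46 ms figure falls straight out of the table size (trivial, but handy when re-deriving things later):

```python
SAMPLE_RATE = 44100
TABLE_SIZE = 2048

# One grain is one wavetable frame played at the native sample rate.
grain_ms = 1000 * TABLE_SIZE / SAMPLE_RATE
print(f"{grain_ms:.1f} ms per grain")   # ~46.4 ms
```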

You can adjust the size of these “grains” too, though doing so involves some math. If you edit your wavetable, you can change the FFT “Window Size” to control the duration of each grain. You then have to recalculate the “root pitch”, but in octaves it isn’t so bad: with 4096 as the window size, drop an octave; with 1024, go up an octave. These double and halve the duration of each grain, respectively.
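The octave rule generalizes: the grain length scales linearly with window size, and the root pitch moves by log2 of the ratio against the 2048 baseline. A sketch:

```python
import math

SAMPLE_RATE = 44100
BASE_SIZE = 2048   # the size whose root is that detuned low F

for window in (512, 1024, 2048, 4096, 8192):
    # Bigger windows play back slower, so the root pitch drops.
    root_shift_octaves = -math.log2(window / BASE_SIZE)
    grain_ms = 1000 * window / SAMPLE_RATE
    print(f"window {window:4d}: root {root_shift_octaves:+.0f} oct, "
          f"grain {grain_ms:6.1f} ms")
```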

You can also set Vital to “Vocode” spectral mode… in this mode, reducing the pitch of the oscillator will reduce the window size dynamically but keep the sample playback rate the same (effectively extending/shrinking each grain). You can then use the Spectral Shift knob to “tune” the grains (Formant Shift mode works similarly, except it also reduces oscillator pitch, so you have to turn the Formant Shift knob opposite your pitch motion to extend the grains). You can set up a Macro that adjusts the Formant and Oscillator pitch knobs in opposite directions, and that Macro can then manipulate “grain size” (the Formant Shift knob covers 2 octaves in each direction, so you need your bipolar macro connected to oscillator semitones, bipolar, with a 48 range, and to Spectral Morph Amount, bipolar, with a -1 range).
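If I’m reading the ranges right (the 2-octaves-each-direction span of the Formant Shift knob is stated above; the rest is arithmetic), the opposing depths work out like this. The `macro_targets` helper is hypothetical, just to make the mapping explicit:

```python
FORMANT_SPAN_SEMITONES = 48   # +/-24 st: 2 octaves each direction (from above)

def macro_targets(macro_value):
    """macro_value in [-1, +1]; returns (osc semitone offset, formant mod amount)."""
    osc_semitones = macro_value * (FORMANT_SPAN_SEMITONES / 2)  # bipolar, 48 range
    formant_amount = -macro_value                               # bipolar, -1 range
    return osc_semitones, formant_amount

print(macro_targets(0.5))   # (12.0, -0.5): pitch up an octave, formant down to match
```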

Now, all that’s left is to use Unison mapped to the Wavetable position to create a “grain cloud”.

Tong Cloud.vital (440.8 KB)


Beyond using this for granular, I’ve realized that you can actually use Vital as a resynthesis engine here, in the style of Image-Line’s Harmor or Unfiltered Audio’s SpecOps. You don’t have quite the degree of control those normally afford (Harmor lets you do things like manipulate spectral information by curve, and SpecOps lets you postprocess spectral bands individually, whereas Vital only has its warp modes for real-time partial manipulation), but you get more control over oscillator phase than Harmor gives, AND an arbitrary number of frequency bands (by default, Vital’s WT engine breaks the audio into 1024 partials instead of Harmor’s 512, but you can set that window size yourself)… and it all happens immediately, without latency (unlike in SpecOps), and can be automated easily via Vital’s internal parameters.

Resynthesizer.vital (587.6 KB)

The patch description includes the following instructions, so you can remember how it works:

Drag a sample into Osc 1 as a Wavetable. Ensure the wavetable mode is set to File Blend, Window Size is set to 0, and all Normalize and Remove DC options are untoggled. C5 will play back the sample at its original pitch. Note that to approximate Harmor’s resynthesis more accurately, the sample FFT size should be set to 1024 and the oscillator coarse pitch increased 1 octave. LFO 1 controls playback rate; by default this is keytracked (so the audio will resynthesize faster or slower at different pitches), and LFO 2’s phase slider adjusts the base duration by applying extreme detune to LFO 1’s coarse pitch. If LFO 1 is set to other modes, playback tempo/rate can be controlled independently of pitch.

So to start, you have to use LFO 2’s phase to “tune” the sample length to be correct, and the keytracking LFO 1 will play the sample back like a real sampler. This isn’t really useful, but it’s the default because I think it’s impressive that it actually works. It gets more interesting once you take control of what LFO 1 does.

For example, if you set LFO 1 to be in seconds, you can specify the duration of the original wave file (NOTE: due to the 256-frame limit, you are limited to 4 seconds of audio to resynthesize), and from there your keyboard acts as a pitch shifter.
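The rate bookkeeping here is simple (a Python sketch; the keytracking convention, rate doubling per octave, is my assumption based on how the patch behaves):

```python
def lfo_hz_for_duration(seconds):
    # In seconds mode, one full LFO ramp should last one playthrough.
    return 1.0 / seconds

def keytracked_rate_hz(base_hz, semitones_from_root):
    # When the LFO keytracks, playback rate scales the same way pitch does.
    return base_hz * 2 ** (semitones_from_root / 12)

base = lfo_hz_for_duration(4.0)      # 0.25 Hz for a 4-second sample
print(keytracked_rate_hz(base, 12))  # 0.5: an octave up plays twice as fast
```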

If you set LFO 1 to Sync mode, the sample will play back and you can repitch it in real-time.
If you set LFO 1 to Tempo mode, you can lock a loop to the tempo of your DAW and retime it as you please.
Freeze LFO 1 and you can drag the wavetable slider directly as a “spectral freeze” which is a great effect, though be aware pitch information is retained.
If you map anything to LFO 1’s keytrack pitch and Oscillator 1’s coarse detune at the same depth (once you have LFO 2’s length set up), you can use that parameter as a sort of “tape stop” that slows down the audio. I haven’t figured out how to make a scratcher yet; I think it would require a second keytracked LFO with a reversed shape and some bipolar stuff.
Oh, speaking of: LFO 1’s shape controls playback speed, so if you invert the ramp, you get reversed audio. Sort of. Technically each grain is still played forward rather than in reverse, but it works for most cases.

The spectral warp modes also do interesting things: Harmonic Stretch repitches the audio. Inharmonic Stretch is a linear pitch shifter (additive, rather than multiplicative). Formant Scale gives a granulizer- or ring-mod-type effect (as explained above). They all have interesting properties when applied to a “spectrum” that’s not periodic.

EDIT: Oh, and if you manipulate the LFO 1 trigger modes, you can use it to do things like set up loop points in a sample. Since, you know, you can’t do that in the noise sampler. They’re even automatable.


You’re incredible :grinning:

I’ve been thinking a lot lately that it would be pretty cool for a synth to kind of bridge the gap between wavetable and sample playback/resynthesis. Harmor was the only thing that ever came close but it’s really a pain to use. Vital gets very close, I’m blown away by the quality I get back from it.

It’s the sort of thing that would be far more powerful if there were a native way of doing it, though. Like a way of dragging a sample to an oscillator, saying “import as spectral bins”, and having Vital preconfigure the oscillator so that it plays back at the native sample rate, whatever that is (because this method only works with 44.1kHz wav files; I have some stuff at other sample rates and you have to do more adjustment. And you’re already down at the lower end of the native oscillator detune range, so then you have to use an LFO/Macro to force the knob below its minimum position, and it’s just generally a mess).

I also wish there were a cleaner way of extracting the actual playback rate, though in order to do that with the keytracked oscillator, one thing that would be necessary is for a table to import ONLY at its initial size, rather than stretching itself to fill all 256 frames. At least that way, once you had the duration set correctly, it would STAY set correctly unless you changed it. Or maybe a way of taking the size/duration of the wavetable and using it as an LFO sync option. But I guess that’s what I get for pushing Vital well beyond the bounds Matt intended it to be used for.


Oh wow! I need to digest this somehow, it looks very nice. I tried to use import-as-wavetable for something like this but lacked the knowledge and willingness to calculate. Now we’re going somewhere! :clap::sweat_smile:


You da man! Think Matt could use your knowledge! Way ahead of me :grinning:

I’ve been looking for a way to turn an oscillator into a sampler for a while, and your method is pretty awesome. I have one question though, as simple as it might actually be: how do you “correctly” export a sample? I tried to render 2 bars at 120 BPM (4 seconds) at 44.1 kHz with 256-point sinc (I’m on FL Studio), but when imported into Vital as wavetables, there are audible artifacts that, no matter the playback speed, are always there. This doesn’t happen with your original Resynthesizer patch. Also, when playing the patch with LFO 1 set to Sync, I hear detuning artifacts (this happens in your original patch too).

I’d need to see your attempt at the patch to be sure; typically the issue has to do with not setting windowing correctly in Vital, which makes the reconstruction iffy. You shouldn’t have pitch artifacts at all, though, unless you’re not playing the correct note to reproduce the sample. Usually the artifacts are window-boundary artifacts (between one wavetable frame and the next there’s bad crossfading/interpolation, so the sample isn’t reconstructed exactly end-to-end as one frame stops and the next starts).


I added my sample in OSC 2 of your patch with the same coarse and fine pitch settings, and recreated the LFO settings on LFOs 3 and 4.

Resynthesizer (Original + My Attempt).vital (961.7 KB)

What’s weird is that by playing the C5 note multiple times, the artifacts show up at different points in the wavetable playback, but they’re always there.


phase randomization is at zero?

Yes, I also keep phase rand at 0 in my default preset.

it would be convenient to have a latch for midi notes while trying to create a grain type patch. i like to use the standalone sometimes for quick ideas and don’t like to load up a daw and create a midi clip, draw a note and listen to it retrigger every time it loops.

Nope! Sorry, but at least in the patch you uploaded, phase randomization was set to 100%. You also had wavetable normalization turned on, which is less than ideal.

By swapping those two things I got this:

There’s still some artifacting on the low notes (which is something that you will get with highly tonal sounds if the window duration isn’t PERFECT); you can fix this if you set window fade to 1 instead of zero, though that will produce some fluttering on the higher notes for the same reason (now instead of being a tiny bit too long, the windows are a tiny bit too short).

Whoops! I must have set the rand to 100 to see if anything would change, and forgot about it. I remember leaving the normalization on because my sample was almost inaudible, though. I also set the OSC 3 rand to 100% but can’t remember why (I actually mistakenly cleared my default preset and had to go back to an old preset to restore it). Anyway…

I did some testing yesterday with a new, different tonal/atonal 4-second sample, and… I had no issues. For whatever reason, the sample I was using before was bad. I’m going to do additional testing today to see if I can recreate the issue.

The reason wavetable normalization isn’t great for this is that, as far as I know anyway, it normalizes each frame of the wavetable independently rather than the whole thing, which can cause boundary-crossing glitches (though obviously with your sample that was less of an issue, as it had fairly consistent volume throughout).
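A toy numpy illustration of the difference (assuming per-frame peak normalization, as described; not Vital’s actual code):

```python
import numpy as np

frame_len, n_frames = 64, 4
t = np.arange(frame_len * n_frames)
level = np.repeat([1.0, 0.5, 0.25, 0.125], frame_len)  # decaying, frame by frame
signal = level * np.sin(2 * np.pi * t / 16)            # 4 cycles per frame
frames = signal.reshape(n_frames, frame_len)

per_frame = frames / np.abs(frames).max(axis=1, keepdims=True)
whole = frames / np.abs(frames).max()

# Per-frame normalization boosts the quiet frames back to full scale,
# creating a level jump (a discontinuity) at every frame boundary.
print(np.abs(per_frame).max(axis=1))  # every frame now peaks at 1.0
print(np.abs(whole).max(axis=1))      # the original decay is preserved
```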

I should note, though, that the best results for this sort of sample (a single note decaying over time) typically come from using Vital’s internal resynthesize method, or at least from manually computing the FFT window size to match the fundamental pitch. A baseline 2048 or 1024 wavetable size is the approach for sounds that don’t have a single clear fundamental pitch.

Okay, I ran some more tests, and maybe I’m realizing even yesterday’s test had some problems, albeit much, much less noticeable ones…

This is what I got yesterday:

This is what I got now, with much more noticeable issues:

Again, the artifacts are related to the playback speed, because at different speeds I get the low rumble in different places in the sample. Both examples here last 4 seconds, no keytracking.

Also, importing tonal/atonal samples as Pitch Splice results in other artifacts because of the atonal content (there’s not enough resolution to represent the atonal frequencies).

It’s fairly hard to diagnose issues by ear when I have no reference for how the original sample was supposed to sound. But what I will tell you is that the main issue you will run into is wavetable-oscillator synchronization.

What happens is, if a wave cycle doesn’t fit precisely in a wavetable frame (which is to say, if it does not zero-cross at the frame boundary), you will get very audible artifacts when attempting to play it back at an imperfect speed. This is because, when the next wavetable cycle begins, if it is not triggering the next waveform in sequence, then rather than stitching two disparate waveforms of one size into a continuous waveform of a different size, it winds up forcing a repeat with a discontinuity at the wavetable boundary. This introduces a full series of harmonics (like a sawtooth wave) at integer multiples of the fundamental of the wavetable play rate (that low detuned F).
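You can watch this happen numerically: tile one frame of a sine that doesn’t complete an integer number of cycles, and the spectrum smears into a harmonic series of the frame-repeat rate (a numpy sketch of the general phenomenon, not of Vital’s exact interpolation):

```python
import numpy as np

def tiled_frame_spectrum(cycles, frame_len=2048, repeats=16):
    n = np.arange(frame_len)
    frame = np.sin(2 * np.pi * cycles * n / frame_len)
    looped = np.tile(frame, repeats)      # force end-to-end frame repetition
    mag = np.abs(np.fft.rfft(looped))
    return mag / mag.max()

clean = tiled_frame_spectrum(10.0)  # integer cycles: zero-cross at the boundary
dirty = tiled_frame_spectrum(10.3)  # fractional: discontinuity at every boundary

# The clean loop concentrates its energy in essentially one partial; the
# dirty one sprays a sawtooth-like series across multiples of the frame rate.
print((clean > 0.01).sum())   # very few significant bins
print((dirty > 0.01).sum())   # many significant bins
```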

Windowing will help with this because it essentially crossfades the wavetables as you move through them, so they blend better; but this can produce ring-mod-type artifacts at faster/higher playback rates, because rather than the windows repeating cleanly, they repeat with an oscillating up-and-down amplitude.

The proper solution is to use overlapping crossfaded windows but of course Vital doesn’t do this because it’s a wavetable synth we’re forcing to behave as a resynthesizing granular sample playback engine.

EDIT: actually, on this last point, I don’t strictly KNOW what Vital does to interpolate single wavetable frames. I think File Blend mode is Hann windowing, but I think it only works to create frames up to 256… if that’s so, it would mean that you get more “temporal resolution” using an audio file shorter than 4 seconds, because each wavetable frame would have overlapping redundancy, which would smooth out the artifacts. Longer samples may be a liability with a fixed-size wavetable; I admit I haven’t experimented with this.

I’m sorry, I thought file names showed up for audio here on the forums. My bad! These two audio files contain the original sample in their first half, while the second half is the Vital result. Both 4 seconds.

I had figured out that the key to having no artifacts was satisfying the zero-crossing requisite; in fact, when importing a wavetable created through the editor and exported as .wav, the result was perfect, no matter the playback speed.

Though, if I remember correctly, in another test I did with a tonal/atonal sample not made by me, but available in a commercial sample pack, I got the “repeating grains” artifact, but without the low rumble/crumble F-harmonics one.

Though, if samples with imperfect crossings only play right at the correct playback speed, isn’t it weird that I still get artifacts?

Are you sure the playback speed is EXACTLY right? If the audio file is 20,355 samples long and you play it back as though it were 20,351 samples long you’d get the artifact. I don’t know if any of Vital’s LFO duration values are THAT precise because they don’t need to be.
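To put numbers on how tight that tolerance is (pure arithmetic, reusing the illustrative 20,355/20,351 figures from above):

```python
import math

SAMPLE_RATE = 44100
true_len = 20355     # actual file length in samples (illustrative figure)
played_len = 20351   # what the LFO duration implies

drift_per_pass = true_len - played_len           # 4 samples per loop
drift_ms = 1000 * drift_per_pass / SAMPLE_RATE   # ~0.09 ms per pass
rate_error_cents = 1200 * math.log2(true_len / played_len)

print(f"{drift_ms:.3f} ms drift per pass, {rate_error_cents:.2f} cents rate error")
```

A rate error of about a third of a cent, far too small to dial in by ear, still walks the grain boundaries by 4 samples on every pass.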

Interesting question. To export the samples (I did this more accurately for the second example), I set attack and release to 0 and hold to 4 seconds. The resulting render lasts 4 seconds, as read from the Edison editor. Actually, rendering just 2 bars at 120 BPM results in a render that lasts 4 seconds but with the (supposedly zero-second) release happening a few samples early, so I render four bars (1 silence, 2 sample, 1 silence) and trim the start and end. Then I play them back in Vital as you showed in your Resynthesizer patch, but without keytracking: just 4 seconds, no smooth. This is the interesting part: I don’t actually know whether the LFO really lasts 4 seconds… I wonder if it’s somehow an issue with my setup, or just with Vital.

EDIT: Yes, the majority of the commercial samples I tested were shorter than 4 seconds, while with the trimmed 4-second ones it was tricky to establish whether they were playing back correctly or not.

EDIT 2: Also probably worth mentioning that .wavs exported from Vital are tuned to F2 at -22 cents (which checks out), but they last 5.944 seconds, at 88.2 kHz and 16-bit.