That’s like the definition of a granular synth. It doesn’t make them any less of a synthesizer. The source doesn’t matter; what a synth does is take an audio source, manipulate it, and put it out according to user-defined parameters. Synths are basically effect chains with a primary sound source.
What sets them apart from samplers, imo, is that a sampler plays back samples that can be used as they are, whereas a synth uses sampled material to construct a new kind of signal. The line seems blurry to me, since surely one can use many samplers like a synth, but rarely can one use a synth like a sampler.
AFAIU, granular ‘synths’ don’t reinterpret the audio, though. They use it as-is and just cut it up/spit it out in fancy ways. Even AI seems to be properly reinterpreting the audio.
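To illustrate what I mean by “cut it up/spit it out in fancy ways”: here’s a rough Python sketch of the granular idea, where the output is literally the input samples, just windowed and rearranged. All names and numbers are made up for illustration, not any real synth’s internals.

```python
import math

def granulate(source, grain_len=2048, hop=512, out_positions=None):
    """Chop `source` into windowed grains and overlap-add them at new
    positions. The grains are the original samples, untouched except
    for a fade envelope -- which is the point: the audio is reused
    as-is, just rearranged."""
    starts = list(range(0, len(source) - grain_len, hop))
    if out_positions is None:
        # a trivially "fancy" rearrangement: reverse the grain order
        out_positions = list(reversed(starts))
    out = [0.0] * len(source)
    for src_start, dst_start in zip(starts, out_positions):
        for i in range(grain_len):
            # Hann window so overlapping grains crossfade smoothly
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)
            if dst_start + i < len(out):
                out[dst_start + i] += source[src_start + i] * w
    return out
```

Every output sample is a windowed copy of an input sample, which is why it feels more like a fancy sampler than “real” synthesis to me.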
As per your previous comment, it does seem like that: Vital is kind of what I might feel is cheating/kludging/‘hybridizing’ its way to sound design and isn’t really approaching it from a kind of ‘purist’ standpoint, say by using 512 oscillators to produce 512 sine waves and that’s that.
Maybe I should be looking at Virsyn’s upgraded 64-bit Cube 2 or Razor or Parsec?
What do you think of physical modeling synths? Seems some approach ‘physical modeling’ differently too.
If we want to talk effect chains, seems granular synths are glorified equalizers, yes? But again, my feeling about it, at least so far, is of granular synths as not really being synths in a pure sense, but more like fancy samplers because they are using the same audio.
The differences can be subtle, but they’re probably best explained by an impersonator impersonating your voice, say. It’s not your voice anymore, but a reinterpretation, a synthesis, of it. To me, that seems like real synthesis.
There’s what Vital calls the “wave source,” which is actually additive harmonics.
When and where is that applied and could one work entirely in that part/section/realm in Vital without touching the rest? If so, how flexible is it?
It doesn’t make them any less of a synthesizer.
Sorry, Herman, but to me, it does.
It would just be impractical to play back 512 or 1024 sines in real time to construct a waveform, in terms of processing economy. If your use case requires that, I guess you gotta use it sparingly, bounce it, or wait until we have more powerful processors before that kind of synthesis can be used extensively in production. Playing a single synth live might work okay if your rig has enough processing power, but we’re really hitting the limits here.
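Just to put rough numbers on the “processing economy” point, a quick back-of-envelope. The voice count and partial count are assumptions for illustration, not measurements of any particular synth:

```python
# Back-of-envelope cost of naive real-time additive synthesis.
sample_rate = 48_000   # samples per second
partials = 1_000       # roughly the sine count needed to reach Nyquist
voices = 8             # modest polyphony (an assumption)

# one sine evaluation per partial, per voice, per sample
evals_per_second = sample_rate * partials * voices  # 384,000,000
```

That’s hundreds of millions of sine evaluations every second before filters, effects, or the rest of the patch, which is why bouncing or pre-rendering looks attractive.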
Other than that, there might be synths that use trigonometric maths to generate certain waveforms, but they’re then bound to the limits of those functions.
But I just mentioned three additive synths that have been around for over a decade: Razor, Parsec, and Cube 2. I’ll add Harmor.
So why shouldn’t current rigs be able to handle what rigs could ~10 years ago, and therefore, by implication, Vital, if I can use it strictly in the additive sense? Can I? Did you not just write, “There’s what Vital calls ‘wave source’ that’s actually additive harmonics”?
Wave source is one of the wavetable editor’s primary sound sources. As said, it likely gets rendered to sampled wavetable frames offline, not in real time, since covering the range from the bottom of human hearing up to the Nyquist frequency would require about a thousand additive sines, which would eat up so much CPU time that it’s hard to justify with current hardware. As said, you can get a taste of that with Zebralette3.
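A minimal sketch of that offline-render idea, assuming the textbook additive recipe for a sawtooth (harmonic k at amplitude 1/k), summed once into a single-cycle frame. This is illustrative only, not Vital’s actual code:

```python
import math

def render_saw_frame(frame_len=2048, partials=1024):
    """Render one single-cycle wavetable frame by additive summation.
    Done once, non-realtime; playback then just reads the table, so
    the thousand-sine cost is paid ahead of time instead of per sample."""
    frame = []
    for n in range(frame_len):
        phase = 2 * math.pi * n / frame_len
        # sawtooth: sum of harmonics k with amplitude 1/k
        s = sum(math.sin(k * phase) / k for k in range(1, partials + 1))
        frame.append(s)
    return frame
```

Once the frame exists, the per-sample cost at playback is just a table lookup and interpolation, which is presumably why the wavetable route wins on current hardware.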
Gotta admit my technical knowledge doesn’t stretch to this, if someone is able to shed light on the subject I’d surely be interested in reading it.
LOL, let’s get some rest and wait and see then. I’m heading out. Until next time.
It’s not an abyss, it’s just that golf is boring
Still a bit funny that you’re not in the analog camp when discussing purity and holism of sound reproduction - not even a mention of Eurorack??
I joke, ofc. @glomerol, I’d recommend getting some books on the mathematics and design behind sound reproduction; it’ll help answer some of these questions.
I’ve done some deep dives myself, but in the end I decided that time spent researching is less time spent creating music.
What’s more important, the tools used to create or the creation itself? An age-old art conundrum, but that’s the best part: it’s art, so you get to make up questions and make up your own answers. My tip is to pick a synth and start making music (or sound design, patches, etc., whatever your thing is).
Don’t let holism consume you, Aristotle, unless that torment of searching for your answer makes some sick music.