That seems to be the catch, or part of it: the model -- sines, wavetables, complex waveforms and/or physical models, etc. -- determines the approach, the results (the 'sound'/character), and maybe the flexibility. As an aside, I had recently been looking into impulse responses (IRs), 3D sound and panning, and even -- inspired by my 3D background, and speaking of physical modeling -- the idea of 'ray-tracing' an artificial environment to generate an IR. If you can import a free Blender model of a cityscape or a kitchen, you have an IR right there: the audio gets 'ray-traced' as though it were a ray/wave/photon of light.
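To make that Blender-to-IR idea slightly more concrete: below is a minimal Python/NumPy sketch of the classic image-source method for a rectangular 'shoebox' room (Allen & Berkley style), which is the simple cousin of tracing rays through an arbitrary mesh. A real geometry-aware tracer over an imported model would replace the mirror-image loop with ray/triangle intersection tests against the mesh; all the names and parameters here are made up for illustration.

```python
import numpy as np

def shoebox_ir(room, src, mic, fs=48000, max_order=8, absorption=0.3, seconds=1.0):
    """Impulse response built from mirror-image sources in a box-shaped room."""
    c = 343.0                          # speed of sound in m/s
    reflect = 1.0 - absorption         # energy kept per wall bounce
    ir = np.zeros(int(fs * seconds))
    orders = range(-max_order, max_order + 1)
    for nx in orders:
        for ny in orders:
            for nz in orders:
                for px, py, pz in np.ndindex(2, 2, 2):
                    pos, bounces = [], 0
                    for n, p, L, s in ((nx, px, room[0], src[0]),
                                       (ny, py, room[1], src[1]),
                                       (nz, pz, room[2], src[2])):
                        pos.append(2 * n * L + (1 - 2 * p) * s)   # image position
                        bounces += 2 * abs(n) if p == 0 else abs(2 * n - 1)
                    d = np.linalg.norm(np.array(pos) - np.array(mic))
                    t = int(round(d / c * fs))
                    if 0 < t < len(ir):
                        ir[t] += (reflect ** bounces) / d   # 1/r spreading + wall loss
    return ir

# A 6 x 4 x 3 m 'kitchen', source and mic somewhere inside it:
ir = shoebox_ir(room=(6.0, 4.0, 3.0), src=(1.0, 1.0, 1.5), mic=(4.0, 2.5, 1.5))
```

Convolving a dry signal with `ir` (e.g. scipy.signal.fftconvolve) then gives the 'being in that room' effect.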
I will mention Serum 2's new 'spectral' oscillator. I'm unsure how it compares and contrasts with 'additive' synthesis (easier on the CPU, for one?), but I guess that can be part of the discussion and research. (Can the spectrum move like a landscape/waterscape or a video, and morph into other spectra? Scanline shapes/movements?) Has anyone tried the new Serum 2? I know that some have been bugged by Vital's dev cycle, so there it is, FWIW.
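I don't know how Serum 2 implements its spectral oscillator internally, but the usual CPU argument is that an FFT-based oscillator can synthesize thousands of bins in one inverse transform, where naive additive pays for one oscillator per partial. Purely as a toy model of 'the spectrum morphing like a landscape', here's a hypothetical sketch that crossfades one set of partial amplitudes into another while summing a sine bank -- nothing here reflects Serum's actual internals.

```python
import numpy as np

def additive_morph(mags_a, mags_b, freqs, fs=48000, seconds=2.0):
    """Crossfade one set of partial amplitudes into another while summing
    a sine bank -- the 'spectrum as a moving landscape' idea, in miniature."""
    n = int(fs * seconds)
    t = np.arange(n) / fs
    x = np.linspace(0.0, 1.0, n)            # morph position, 0 -> 1
    out = np.zeros(n)
    for f, a, b in zip(freqs, mags_a, mags_b):
        amp = (1 - x) * a + x * b           # per-partial amplitude trajectory
        out += amp * np.sin(2 * np.pi * f * t)
    return out / max(1e-9, np.abs(out).max())

harmonics = np.arange(1, 17)
freqs = 220.0 * harmonics                   # 16 harmonics of A3
saw = 1.0 / harmonics                       # saw-like spectrum
square = saw * (harmonics % 2)              # odd harmonics only
y = additive_morph(saw, square, freqs)      # saw morphing into a square
```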
Might be interesting if the ideas of additive and physical modeling somehow merged. Even genetic algorithms applied, say, to waveform evolution over time -- a 'biosynth'. Synplant might be a bit like that, perhaps, but I'm unsure how it compares to my idea of something more automated, realtime, and genuinely genetic ('sexual'/multigenerational waveforms evolving in realtime). There is, or was, a free synth called Trilobite, for example, although it looked non-realtime and tedious and escaped my interest at the time.
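Synplant's actual breeding mechanics are Sonic Charge's business, so purely as a sketch of what 'sexual'/multigenerational waveform breeding could mean: genomes as harmonic amplitudes, uniform crossover plus mutation, and a crude automatic fitness score standing in for ear-driven selection. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N_HARMONICS, TABLE_LEN = 32, 2048

def render(genome):
    """Genome = harmonic amplitudes -> one single-cycle wavetable."""
    phase = np.linspace(0, 2 * np.pi, TABLE_LEN, endpoint=False)
    table = sum(a * np.sin((k + 1) * phase) for k, a in enumerate(genome))
    return table / max(1e-9, np.abs(table).max())

def breed(mother, father, mutation=0.05):
    """'Sexual' recombination: uniform crossover plus Gaussian mutation."""
    mask = rng.random(N_HARMONICS) < 0.5
    child = np.where(mask, mother, father)
    child = child + rng.normal(0, mutation, N_HARMONICS)
    return np.clip(child, 0, 1)

# One generation step: the two fittest (here: closest to a saw-like
# target spectrum) parents breed the next population.
population = rng.random((8, N_HARMONICS))
target = 1.0 / np.arange(1, N_HARMONICS + 1)
fitness = -np.abs(population - target).sum(axis=1)     # crude distance score
parents = population[np.argsort(fitness)][-2:]
next_gen = np.array([breed(*parents) for _ in range(8)])
tables = [render(g) for g in next_gen]                 # audition these per generation
```

Run one generation per bar (or per note) and you get the 'realtime multigenerational' part: each child table is audible as it is bred.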
Sticky-note to self: maybe mention the case of the video synthesizer as applied to audio/'moving-terrain' synthesis (non-static/evolving 'wavetables'/'scenes'/events, with 'wind', 'rain', etc. as parameters) -- see the sketch below.
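Expanding that sticky-note into a sketch: treat a 2D heightfield as a stack of scanline wavetables, perturb it at 'video rate' with hypothetical 'wind' and 'rain' parameters, and read one row per frame as the current single-cycle waveform. Everything below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
W, H = 2048, 256                             # table length x number of terrain rows
terrain = np.sin(np.linspace(0, 8 * np.pi, W))[None, :] * np.hanning(H)[:, None]

def weather_step(terrain, wind=0.3, rain=0.01):
    """One video-rate update: the scene scrolls, wind shears rows, rain dents it."""
    terrain = np.roll(terrain, 1, axis=0)                    # terrain scrolls by
    for y in range(terrain.shape[0]):
        terrain[y] = np.roll(terrain[y], int(wind * y) % W)  # wind shear per row
    terrain[rng.random(terrain.shape) < rain] -= 0.2         # raindrop dents
    return np.clip(terrain, -1.0, 1.0)

def scan_row(terrain, row, start, freq=110.0, fs=48000, n=512):
    """Read one scanline of the terrain as a single-cycle waveform at `freq`."""
    phase = ((start + np.arange(n)) * freq / fs) % 1.0       # phase stays continuous
    return terrain[row % H][(phase * W).astype(int)]

chunks = []
for frame in range(100):                                     # ~1 s of evolving audio
    terrain = weather_step(terrain)
    chunks.append(scan_row(terrain, row=frame, start=frame * 512))
audio = np.concatenate(chunks)
```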