…Either as an entirely new plugin or somehow added to Vital. See also this link and its thread in general.
The idea is to make the wavetable less static and more fluid/dynamic/natural/unnatural/mindblowing™. That’s if any of this makes sense, of course, and is implementable…
The idea I have in my head, as a mere end-user, is one where moving images (videos, maybe 3D animation files) are imported as layers. Each image, or moving 3D model, is ‘displaced per frame’ into the 3D axes to get the moving sines/partials/harmonics. To get the sound/ultimate waveform, that surface is then scanned (via perhaps a simple scanline, or more complex scanline shapes, and/or multiple scanlines) and blended in various ways with the other displaced, moving 3D topologies, rather like how waves interact/blend in real life…
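To make the ‘displace per frame, then scan’ part concrete, here is a minimal sketch, assuming each decoded video frame arrives as a 2D grayscale array whose brightness is treated as surface height. All the names (`frame_to_waveform`, the scanline defaults) are hypothetical, not anything Vital actually exposes:

```python
import numpy as np

def frame_to_waveform(frame, row=None, num_samples=2048):
    """Treat one grayscale frame as a heightmap and read a single
    horizontal scanline across it to get one wavetable cycle."""
    height, width = frame.shape
    if row is None:
        row = height // 2  # default: scan the middle of the image
    scanline = frame[row].astype(float)
    # Resample the scanline to the wavetable's fixed cycle length.
    x_old = np.linspace(0.0, 1.0, width)
    x_new = np.linspace(0.0, 1.0, num_samples)
    cycle = np.interp(x_new, x_old, scanline)
    # Center brightness and normalize into the -1..+1 audio range.
    cycle -= cycle.mean()
    peak = np.abs(cycle).max()
    return cycle / peak if peak > 0 else cycle

# Each video frame becomes one frame of the wavetable:
frames = [np.random.rand(64, 64) for _ in range(8)]  # stand-in for decoded video
wavetable = np.stack([frame_to_waveform(f) for f in frames])
```

The ‘more complex shapes of scanline’ idea would just swap the straight `frame[row]` read for sampling along a curve or several paths through the same heightmap.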
Moving fractal landscapes, which 3D programs have been generating for a while now, could also be used as part of the video imports. The trick may not necessarily be one image/topology alone, but more than one, interacting (like waves do in real life) with those in the other layers. Different blending modes too, maybe in realtime, maybe not.
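The layer-interaction/blending-modes idea could look something like this sketch, where plain addition is real-world wave superposition and the other modes are made-up examples of what a mode menu might offer (the `blend` function and its mode names are hypothetical):

```python
import numpy as np

def blend(layer_a, layer_b, mode="add", mix=0.5):
    """Combine two layers (waveforms or heightmaps) with a chosen mode,
    then crossfade between layer_a alone and the blended result."""
    if mode == "add":
        out = layer_a + layer_b             # superposition: constructive/destructive interference
    elif mode == "multiply":
        out = layer_a * layer_b             # ring-mod-like interaction
    elif mode == "max":
        out = np.maximum(layer_a, layer_b)  # whichever surface is higher wins
    else:
        raise ValueError(f"unknown mode: {mode}")
    out = (1.0 - mix) * layer_a + mix * out
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out

# Two sine layers interfering:
t = np.linspace(0.0, 1.0, 2048, endpoint=False)
a = np.sin(2 * np.pi * t)
b = np.sin(2 * np.pi * 3 * t)
mixed = blend(a, b, mode="add", mix=1.0)
```

The `mix` crossfade is one way the ‘maybe in realtime’ part could be automated without recomputing the layers themselves.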
WRT your Gold Water Dance video as inspiration, I suppose there could even be subfeatures that let end-users interact with each layer of the topologies, rather like how waves in a pond don’t just come from one source: they can also come from, say, skipping a stone across the surface (‘linear stone-skip oscillator’?), agitating it with one’s hand (‘single-point chaotic agitator; force: hand; force: tornado’, etc.?), or a frosting of wind (‘blanket wind: light’; ‘blanket wind: hurricane’, etc.?). So we get a more complex surface, if it isn’t already.
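A toy version of those agitators, assuming the surface is a 2D grid of heights: a point source radiates damped circular ripples, and the ‘stone-skip’ is just several point sources laid out in a line with staggered start times. Everything here (function names, parameter choices) is a hypothetical illustration, not a proposal for actual DSP:

```python
import numpy as np

def point_agitator(grid_size=64, source=(32, 32), t=0.0,
                   amplitude=1.0, wavelength=8.0, speed=4.0, damping=0.05):
    """'Single-point agitator': circular ripples spreading outward
    from one point on the surface, decaying with distance."""
    y, x = np.mgrid[0:grid_size, 0:grid_size]
    r = np.hypot(x - source[1], y - source[0])   # distance from the source
    k = 2.0 * np.pi / wavelength                 # spatial frequency
    phase = k * (r - speed * t)                  # outward-travelling wavefront
    return amplitude * np.exp(-damping * r) * np.sin(phase)

def stone_skip(grid_size=64, hops=4, t=0.0):
    """'Linear stone-skip oscillator': point sources in a line,
    as if a stone bounced across the surface, each hop starting later."""
    surface = np.zeros((grid_size, grid_size))
    for i in range(hops):
        src = (grid_size // 2, (i + 1) * grid_size // (hops + 1))
        surface += point_agitator(grid_size, source=src, t=t - 0.5 * i)
    return surface

surface = stone_skip(t=1.0)
```

A ‘blanket wind’ could then be broadband noise added across the whole grid, with its level setting the light/hurricane range, and the result scanned into a waveform the same way as any other layer.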
We even have websites with realtime windmaps, and of course tons of psychedelic AI morphing animations that could be used as video imports, as well as increasingly fast and accurate AI 3D model generation. Vital already had that text-prompt thing, so it could tap into AI for any of the new features mentioned here, maybe even to make some realtime processes viable.
Unsure how all this would translate to the actual sound, or how usable it would be, but there you go, FWIW.