Dynamic 'WaveCubes/WaveMaps'

…Either as an entirely new plugin or somehow added to Vital. See also this link and its thread in general.

The idea is to make the wavetable less static and more fluid/dynamic/natural/unnatural/mindblowing™. That’s if any of this makes sense of course and is implementable…

The idea I have in my head as a mere end-user is one where moving images (videos, maybe 3D animation files) are imported as layers. Each image, if not already a moving 3D model, is ‘displaced per frame’ into the 3D axes to get the moving sines/partials/harmonics. Then, to get the sound/ultimate waveform, the result is scanned (perhaps with a simple scanline, or with more complex scanline shapes and/or multiple scanlines) and blended, sometimes in various ways, with other displaced moving 3D topologies, rather like how waves interact/blend in real life…
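To make the ‘scanline’ part concrete, here’s a minimal sketch of the idea in Python/numpy, assuming each video frame arrives as a 2D grayscale array and treating one row of pixels as one cycle of a waveform (the function name `frame_to_waveform` is just mine, not anything that exists in Vital):

```python
import numpy as np

def frame_to_waveform(frame: np.ndarray, row: float = 0.5, length: int = 2048) -> np.ndarray:
    """Read one horizontal scanline of a grayscale frame and turn it into
    one wavetable frame (a single cycle of a waveform).

    frame  : 2D array of pixel intensities, treated as a height map
    row    : vertical position of the scanline, 0.0 = top, 1.0 = bottom
    length : number of samples per wavetable frame
    """
    h, w = frame.shape
    line = frame[int(row * (h - 1))].astype(np.float64)

    # Resample the scanline to the wavetable length
    x_old = np.linspace(0.0, 1.0, w)
    x_new = np.linspace(0.0, 1.0, length)
    wave = np.interp(x_new, x_old, line)

    # Remove DC offset and normalize to -1..+1 so it loops cleanly as audio
    wave -= wave.mean()
    peak = np.max(np.abs(wave))
    return wave / peak if peak > 0 else wave
```

Scanning the same row on every frame of the video would then give one new wavetable frame per video frame, which is where the ‘dynamic’ part comes from.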

Moving fractal landscapes, whose generation has been around for a while in 3D programs, could also be used as part of the video imports. The trick may not necessarily be with one image/topology alone, but with more than one as they interact (like waves do in real life) with those in the other layers. Different blending modes too, maybe in realtime, maybe not.
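A rough illustration of the layer/blend idea, again just a sketch: each layer is a 2D height map, and the modes below are ordinary image-style blends plus straight addition, which is the one closest to real-world wave superposition. Nothing here is an existing Vital feature:

```python
import numpy as np

def blend_layers(a: np.ndarray, b: np.ndarray, mode: str = "add", mix: float = 0.5) -> np.ndarray:
    """Combine two height-map layers before scanning them into a waveform."""
    if mode == "add":          # simple superposition, like overlapping ripples
        out = a + b
    elif mode == "multiply":   # one layer modulates the other
        out = a * b
    elif mode == "max":        # whichever surface is higher wins
        out = np.maximum(a, b)
    else:                      # plain crossfade between the two layers
        out = (1.0 - mix) * a + mix * b
    return out
```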

WRT your Gold Water Dance video as inspiration, I suppose there could even be subfeatures that allow end-users to interact with each (layer) of the topologies, rather like how waves in a pond don’t come from just one source but can also come from, say, skipping a stone across its surface (‘linear stone-skip oscillator’?), or just agitating it with one’s hand (‘single-point chaotic agitator; force: hand; force: tornado’, etc.?), or a frosting of wind (‘blanket wind: light’; ‘blanket wind: hurricane’, etc.?). So we get a more complex surface, if it isn’t already.
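For what those ‘agitator’/‘stone-skip’/‘wind’ interactions might look like under the hood, here’s a toy damped wave-equation surface in Python/numpy. It’s only a guess at how such subfeatures could be simulated, and all the names are made up:

```python
import numpy as np

def step_surface(z: np.ndarray, z_prev: np.ndarray, c: float = 0.3, damping: float = 0.999):
    """One finite-difference step of a damped 2D wave equation over height map z.

    Returns (z_next, z) so the caller can keep rolling the simulation forward.
    """
    # Discrete Laplacian: how 'bent' each point is relative to its neighbours
    lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
           np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)
    z_next = (2.0 * z - z_prev + (c ** 2) * lap) * damping
    return z_next, z

def poke(z: np.ndarray, x: int, y: int, force: float = 1.0, radius: int = 2):
    """'Single-point agitator': push the surface down around (x, y), like a
    finger or a dropped stone. Assumes (x, y) is away from the edges."""
    z[y - radius:y + radius + 1, x - radius:x + radius + 1] -= force
```

A ‘stone-skip’ would then just be a series of pokes along a line over successive steps, and a ‘blanket wind’ could be low-level noise added across the whole surface each step, light or hurricane-strength depending on its amplitude.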

We even have websites with realtime windmaps, and of course tons of psychedelic AI morphing animations that could be used as video imports, as well as increasingly fast and accurate AI 3D model generation. And insofar as Vital already had that text-prompt thing, it could tap into AI for any of the new features mentioned here, maybe even to make some of the realtime processes viable.

Unsure how all this would translate to the actual sound, or how usable it would be, but there you go, FWIW.

I found this some time ago and, while it was/is interesting, it seemed kind of slow/kludgy/limited.

But my main point/question is why the wavetables themselves have to be relatively static, and why they can’t borrow (the code and concepts, etc.) from other kinds of apps, like 3D and other model/video/image generators and simulations, as well as maybe use AI for some kinds of optimization, including realtime, on-the-fly (wave, etc.) generation, such as in the most recent research and results in AI gaming, where the immediate landscapes/environments are generated on the fly rather than designed beforehand.

See also…

AI Slashes Fluid Simulation Times Fifteenfold
Wave Simulation from scratch using finite difference method

Toward Wave-based Sound Synthesis for Computer Animation

Ocean waves simulation with Fast Fourier transform

You have to see these sound waves

Spectrum Waves in Unreal - Ocean Waves Simulation

blender anime ocean waves animation

Sound wave simulation

Cinema4D: Water Cube Animation


Out of curiosity, I selected other videos by the first outfit mentioned above (SonicLAB) and noticed that they do some 3D fluid dynamics in a cube-- a ‘dynamic 3D wavetable’ if you will-- to get sound…

"Protean simulates fluid particles and uses only sine waves to render its sound output. The quantity of these is displayed as ‘Active Partials.’ "

“PROTEAN… uses fluid dynamics and relevant particle forces/interactions to render and organize hundreds of sine waves. Designed & developed by Sinan Bokesoy with presets creations by Ryan Pryor, Laurent Mialon, and Daito Manabe. Protean represents our most ambitious development to date—a true bridge between sonic art and science. Available for OSX and Windows on the sonicLAB/sonicPlanet web-store.”
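I don’t know how Protean actually does it internally, but the quoted description (fluid particles rendered as sine partials) suggests something roughly like this additive sketch, where every particle’s state drives one sine wave; the mapping from particle height/speed to frequency/amplitude is purely my assumption:

```python
import numpy as np

def render_particles(particles, sr: int = 44100, dur: float = 0.05,
                     f_lo: float = 80.0, f_hi: float = 8000.0) -> np.ndarray:
    """Render a short audio block by giving every particle its own sine partial.

    particles : iterable of (height, speed) pairs in 0..1, taken from some fluid sim;
                height picks the partial's frequency, speed sets its amplitude.
    """
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for height, speed in particles:
        freq = f_lo * (f_hi / f_lo) ** height   # map height onto a log frequency scale
        out += speed * np.sin(2.0 * np.pi * freq * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```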

And of course there are also simulations of animal herds running about, as well as bird murmurations:

…As well as AI-morphing images (which could help smoothly morph the waveforms/sound if their images were used to extract the waveforms).

It seems that those kinds of 3D graphics/simulation/animation code, including raytracing and maybe AI animation and optimization-- all ostensibly already/readily available-- could somehow be borrowed/converted/remapped/modded/leveraged for useful/effective sound synthesis and controls.