Andy gives config writers a big leg up when he detects the BEAT of the music and provides it to them. AFAIK this is the first instance of pattern recognition in sonic visualization, and in any case it surely will be followed by more.
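For anyone curious what "detecting the BEAT" might look like under the hood, here is a minimal energy-based sketch in Python. This is not Andy's actual algorithm; the window size and the 1.4 threshold are made-up values for illustration.

[code]
# Minimal energy-flux beat detector sketch (illustration only, not G-Force code).
import numpy as np

def detect_beats(samples, sample_rate=44100, window=1024, history=43):
    """Flag windows whose energy jumps well above the recent average."""
    beats = []
    energies = []
    for start in range(0, len(samples) - window, window):
        frame = samples[start:start + window]
        energy = float(np.sum(frame.astype(np.float64) ** 2))
        energies.append(energy)
        recent = energies[-history:]
        avg = sum(recent) / len(recent)
        # A "beat" is a window whose energy is well above the local average.
        # Nothing is flagged until the history buffer (about one second) fills.
        if len(recent) == history and energy > 1.4 * avg:
            beats.append(start / sample_rate)  # beat time in seconds
    return beats

# Example: a 120 BPM "click track" made of short noise bursts.
sr = 44100
audio = np.zeros(sr * 4)
for i in range(8):                      # one burst every half second
    onset = int(i * 0.5 * sr)
    audio[onset:onset + 500] = np.random.randn(500)
print(detect_beats(audio, sr))
[/code]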
Who will be the first to use the beat to detect the musical genre and use that to guide the config? What about recognizing individual instruments by their signatures? Guitar. Brass. Piano. Female voice. The FFT(t) has all the information (heh). I don't mean to say this is easy, but it's gonna happen. There's active research in recognizing speech and musical instruments; see http://www.auditory.org/postings/2003/376.html for example. Imagine how config writers could use *this* information.
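As a taste of what those "signatures" could mean in practice, here is a rough Python sketch using a single spectral feature (the spectral centroid, roughly how "bright" a frame sounds). The instrument labels and cut-off frequencies are purely invented for illustration; a real recognizer would need far more than this.

[code]
# Crude spectral-feature sketch, not a real instrument recognizer.
import numpy as np

def spectral_centroid(frame, sample_rate=44100):
    """Frequency 'center of mass' of one FFT frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

def crude_timbre_label(centroid_hz):
    # Entirely hypothetical cut-offs, just to show how a config might branch.
    if centroid_hz < 500:
        return "bass / low brass"
    elif centroid_hz < 2000:
        return "piano / guitar body"
    return "cymbals / voice sibilance"

sr = 44100
t = np.arange(2048) / sr
low = np.sin(2 * np.pi * 110 * t)      # A2, low guitar string
high = np.sin(2 * np.pi * 1760 * t)    # A6, much brighter tone
print(crude_timbre_label(spectral_centroid(low, sr)))
print(crude_timbre_label(spectral_centroid(high, sr)))
[/code]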
Some possibilities: Dancing notes on the staff. Specific musical instrument sprites and particles in motion. Wave shapes and flow fields that react to this information.
Further down the line: G-F allowing configs to stash information about a certain track for use next time it's played. Using lyrics as a track plays. The only limit is the imagination, to use a trite phrase.
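To make the stash idea concrete, here is a hypothetical sketch of how little machinery it would need. The file path, the track key, and the stored BPM value are all invented for illustration; nothing like this exists in G-F today.

[code]
# Hypothetical per-track stash: a key/value file keyed by artist + title.
import json, os

STASH_FILE = os.path.expanduser("~/gforce_track_stash.json")  # made-up path

def load_stash():
    if os.path.exists(STASH_FILE):
        with open(STASH_FILE) as f:
            return json.load(f)
    return {}

def remember(track_id, key, value):
    """Store something learned about a track for next time it's played."""
    stash = load_stash()
    stash.setdefault(track_id, {})[key] = value
    with open(STASH_FILE, "w") as f:
        json.dump(stash, f, indent=2)

def recall(track_id, key, default=None):
    return load_stash().get(track_id, {}).get(key, default)

# First play: an analysis pass stores the track's average BPM (value is illustrative).
remember("Miles Davis - So What", "avg_bpm", 136)
# Next play: a config could pre-set its palette or motion speed from the stash.
print(recall("Miles Davis - So What", "avg_bpm"))
[/code]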
I hope Apple and Andy's relationship continues and grows; that should provide some $$$ fuel for continued development. G-F could use an on-screen GUI-based preference window, as has been suggested several times in this forum, and config development would be sped up by a higher-level front end. Maybe I'll hack a simple example of how this could work.
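Here is one rough sketch of how such a front end could look, written in Python/Tk rather than anything G-F actually ships. The prefs file name, the key=value format, and the default values are assumptions for illustration only.

[code]
# Mock-up of a GUI-based preference window (illustrative, not G-Force's real prefs).
import tkinter as tk

PREFS_FILE = "gforce_prefs.txt"   # hypothetical; real prefs live elsewhere

def load_prefs():
    prefs = {}
    try:
        with open(PREFS_FILE) as f:
            for line in f:
                if "=" in line:
                    key, value = line.strip().split("=", 1)
                    prefs[key] = value
    except FileNotFoundError:
        prefs = {"FrameRate": "30", "Fullscreen": "0"}   # made-up defaults
    return prefs

def save_prefs(entries):
    with open(PREFS_FILE, "w") as f:
        for key, entry in entries.items():
            f.write(f"{key}={entry.get()}\n")

root = tk.Tk()
root.title("G-Force Preferences (mock-up)")
entries = {}
for row, (key, value) in enumerate(load_prefs().items()):
    tk.Label(root, text=key).grid(row=row, column=0, sticky="w")
    entry = tk.Entry(root)
    entry.insert(0, value)
    entry.grid(row=row, column=1)
    entries[key] = entry
tk.Button(root, text="Save", command=lambda: save_prefs(entries)).grid(
    row=len(entries), column=1, sticky="e")
root.mainloop()
[/code]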
Once again, my thanks and admiration to Andy and all you great config writers, particularly Rovastar and JayPro.
"G-F could use an on-screen GUI-based preference window as has been suggested several times in this forum."

I had made a GF preferences GUI using VB about a year or two ago, but that was when there weren't as many, or as complicated, prefs to configure. If people want help understanding what each preference does, isn't there documentation online or in the preferences text files themselves?
"What about recognizing individual instruments by their signatures?"

I had expressed a similar vision in a post from quite some time ago. It would be an incredible innovation, but I think this would be extremely hard to do in real time with music unless the visualization is given a separate audio track for each instrument or part of the music.
I think it would be a lot more feasible to create algorithms that could detect changes in music such as build-ups, fade-outs, breakdowns, or a sudden change to loud or quiet. Based on when these occur, the visualization program could react a lot more intuitively to the music, as opposed to transitions and changes happening at random moments. While there are limitations to detecting some of these changes in real time, such as determining how long a fade-in/out lasts, programs could be developed to analyze the music beforehand and generate a script (a pre-defined list of visualization actions) based on their findings, much like the way people make scripts by hand based on the music.
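To sketch that "analyze beforehand, generate a script" idea, here is a toy Python pre-analysis pass that builds a loudness envelope, flags sudden jumps and drops, and prints a time-stamped action list. The action names and thresholds are invented for illustration.

[code]
# Offline pre-analysis sketch: loudness envelope -> scripted visualization actions.
import numpy as np

def loudness_envelope(samples, sample_rate=44100, hop=0.5):
    """RMS loudness measured every `hop` seconds."""
    step = int(sample_rate * hop)
    env = np.array([
        np.sqrt(np.mean(samples[i:i + step].astype(np.float64) ** 2))
        for i in range(0, len(samples) - step, step)
    ])
    return env, hop

def build_script(samples, sample_rate=44100):
    env, hop = loudness_envelope(samples, sample_rate)
    script = []
    for i in range(1, len(env)):
        t = i * hop
        if env[i] > 2.0 * env[i - 1] + 1e-9:           # sudden loud hit
            script.append((t, "flash_and_switch_waveshape"))
        elif env[i] < 0.5 * env[i - 1]:                # sudden drop / breakdown
            script.append((t, "fade_particles"))
    return script

# Example: quiet intro, loud middle section, quiet outro.
sr = 44100
audio = np.concatenate([
    0.05 * np.random.randn(sr * 2),   # quiet
    0.8  * np.random.randn(sr * 2),   # loud section starts -> flash
    0.05 * np.random.randn(sr * 2),   # breakdown -> fade
])
for time, action in build_script(audio, sr):
    print(f"{time:5.1f}s  {action}")
[/code]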
"Andy gives config writers a big leg up when he detects the BEAT of the music and provides it to them."

He already provides a BASS variable for configs that can be quite good for making something change to the beat if you make the change exponential. I am assuming by "BEAT" you are referring to BPM? I could see this being very beneficial to script writers, but I'm not sure how it would be of much use to config writers (configs being the different parts of the visualization, such as WaveShape).
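For what it's worth, here is a toy Python sketch of that "make the change exponential" trick: raising the bass level to a power and letting it decay quickly makes visuals snap on the beat instead of wobbling with every small level change. The variable name mirrors the BASS variable mentioned above, but the exponent, decay rate, and scale mapping are illustrative, not G-Force's actual config syntax.

[code]
# Exponential bass-to-visual mapping sketch (illustration only).
def bass_to_scale(bass_levels, exponent=3.0, decay=0.85):
    """Turn a 0..1 bass level stream into a punchy per-frame scale factor."""
    scale = 0.0
    out = []
    for bass in bass_levels:
        punch = bass ** exponent           # exponent > 1 suppresses small values
        scale = max(punch, scale * decay)  # jump up instantly, fall off smoothly
        out.append(1.0 + scale)            # e.g. a WaveShape size multiplier
    return out

# Steady low rumble vs. one beat: the exponential keeps the rumble nearly
# invisible while the beat still produces a clear pulse.
frames = [0.3] * 5 + [0.95] + [0.3] * 5
for bass, size in zip(frames, bass_to_scale(frames)):
    print(f"bass={bass:.2f} -> size={size:.2f}")
[/code]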