Audio in (6)4K intros
category: general [glöplog]
Why would you use an FFT for beat syncing if you have the note data? And I wouldn't want to waste valuable space on an FFT/DFT in a 4K such as Kindercrasher.
in a 4k with softsynth, you have the outputs of your envelope generators. these are pretty much perfect for syncing.
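A minimal sketch of that idea, assuming a realtime synth that simply exposes its current envelope levels in a struct the visuals can read each frame (all names here are hypothetical, not from any real synth):

#include <math.h>

typedef struct {
    float env[8];                      /* current EG output per channel, 0..1 */
} SynthState;

static SynthState g_synth;

/* the synth calls this whenever a note triggers on a channel */
static void eg_trigger(int channel) { g_synth.env[channel] = 1.0f; }

/* the synth calls this once per audio block: simple exponential decay */
static void eg_decay(float dt)
{
    for (int i = 0; i < 8; ++i)
        g_synth.env[i] *= expf(-6.0f * dt);
}

/* the visuals read it once per video frame, e.g. channel 0 as the kick drum */
static float beat_flash(void) { return g_synth.env[0]; }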
Depends. There are some non-realtime softsynth designs (forall(notes, n) render_note_into_buffer(n) :) where you don't get the EG outputs when you need them. But parsing the note data in a slightly different way is mostly ok.
When you render the song you can also write the envelope values into a buffer (or several), just like you do with the sound. Then at playback time you use waveOutGetPosition or similar to find the current position, so you can still have the whole song prerendered. I hope I explained it somewhat reasonably.
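As a sketch, assuming a mono 16-bit prerendered song: the waveOut* calls are the real WinMM API (link with winmm.lib), while render_song() and the kick_env table are hypothetical stand-ins for whatever your softsynth actually produces:

#include <windows.h>
#include <mmsystem.h>                  /* link with winmm.lib */
#include <math.h>

#define RATE      44100
#define SONG_LEN  (RATE * 60)          /* 60 seconds, mono 16-bit */
#define ENV_STEP  256                  /* one envelope value per 256 samples */

static short pcm[SONG_LEN];
static float kick_env[SONG_LEN / ENV_STEP];

/* placeholder for the real softsynth: a 1 Hz test "kick" with decaying envelope */
static void render_song(short *out, float *env)
{
    for (int i = 0; i < SONG_LEN; ++i) {
        float t = (float)(i % RATE) / RATE;
        float e = expf(-8.0f * t);
        out[i]  = (short)(10000.0f * e * sinf(6.2831853f * 55.0f * t));
        if (i % ENV_STEP == 0) env[i / ENV_STEP] = e;
    }
}

static HWAVEOUT hwo;
static WAVEHDR  hdr;

/* render the song and the envelope track, then start playback */
static void music_start(void)
{
    WAVEFORMATEX wfx = { WAVE_FORMAT_PCM, 1, RATE, RATE * 2, 2, 16, 0 };
    render_song(pcm, kick_env);
    waveOutOpen(&hwo, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL);
    hdr.lpData = (LPSTR)pcm;
    hdr.dwBufferLength = sizeof(pcm);
    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutWrite(hwo, &hdr, sizeof(hdr));
}

/* called once per video frame: envelope value at the current play position */
static float music_kick_env(void)
{
    MMTIME mmt;
    DWORD  i;
    mmt.wType = TIME_SAMPLES;          /* some drivers may fall back to TIME_BYTES */
    waveOutGetPosition(hwo, &mmt, sizeof(mmt));
    i = mmt.u.sample / ENV_STEP;
    if (i >= SONG_LEN / ENV_STEP) i = SONG_LEN / ENV_STEP - 1;
    return kick_env[i];
}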
trc_wm, the DFT code in Kindercrasher isn't more than 50 bytes anyway, and it gave quite a lot of rich information to the visuals (definitely more than noteon/noteoff), although it's not the kind of info you need for synchronization, I agree.
iq: did you apply any windowing to the audio data prior to taking the FFT, like a Hamming/Hann/Kaiser etc?
Of course NOT :) :) :)
and it's not an FFT, it's a DFT (the direct definition of the transform, which is not fast, but small - and "not slow" enough)
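For reference, the direct-definition approach looks roughly like this (not the actual Kindercrasher code, just the same idea: a handful of magnitude bins summed straight from the definition, O(N*bins) instead of O(N log N), but only a few lines and no tables):

#include <math.h>

#define N     1024                     /* analysis window of recent samples */
#define BINS  16                       /* how many spectrum bins the visuals use */

/* x: the last N audio samples, mag: BINS magnitudes for the visuals */
static void direct_dft(const float *x, float *mag)
{
    for (int k = 1; k <= BINS; ++k) {
        float re = 0.0f, im = 0.0f;
        for (int n = 0; n < N; ++n) {
            float w = 6.2831853f * (float)(k * n) / (float)N;
            re += x[n] * cosf(w);
            im -= x[n] * sinf(w);
        }
        mag[k - 1] = sqrtf(re * re + im * im) / (float)N;
    }
}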
FFT, DFT, same thing, different implementation.