How to: write a soft synthesizer
category: code [glöplog]
g.: video, as in, not youtube? I don't have the original, but I could probably get ahold of it if need be.
no youtube is totally cool. found it
googling for slides now
Ah, never uploaded the slides anywhere :) I'll do that real quick.
btw, anyone remember what the old northern dragons talk was that went over getting started with 4k's? I think it was Pilgrimage '05 or something, but it had an awesome starting point for synth coding with more or less the same approach as mu6k's (and mine, back then), plus more example code to get started with. I believe the source to sprite-o-mat helped me as well :)
Ferris: this?
Sharing the good stuffs. I played with this http://blargg.8bitalley.com/bl-synth/
I like the approach because it gives band-filtered square waves with simple, compact, yet efficient code. To get sounds other than square waves out of it, you approximate things with square waves. Works great to get an "80's chiptune" feel (it's actually used in several 8-bit console emulators).
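For flavor, here's a rough sketch of the simplest way to get a band-limited square wave: additive synthesis that just stops adding odd harmonics below Nyquist. This is not blargg's actual code (his library uses a much more efficient band-limited-step technique), just an illustration of what "band-limited" buys you.

```cpp
// Minimal band-limited square wave: sum odd sine harmonics, stopping
// below Nyquist so nothing aliases. blargg's library achieves the same
// spectrum far more efficiently with band-limited steps; this only
// illustrates the idea.
#include <cmath>

float square_bl(float phase, float freq, float sample_rate)
{
    const float nyquist = sample_rate * 0.5f;
    float out = 0.0f;
    // ideal square = sum over odd n of sin(2*pi*n*phase) / n
    for (int n = 1; n * freq < nyquist; n += 2)
        out += std::sin(2.0f * 3.14159265f * n * phase) / n;
    return out * (4.0f / 3.14159265f); // normalize to roughly +/-1
}
```

Run it once per sample with an accumulated phase in [0,1) and you get a square with no aliasing; the obvious optimization is to precompute one cycle into a wavetable per octave instead of summing sines per sample.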
raizor: YES :D awesome; I never actually saw the video :)
marmakoide: yeah, that's an awesome resource as well :)
random ugly thing here. anybody knows where this code is from?
i have that lying around in some unfinished something. i'm sure it was some code from here. really just some ugly tiny thing.
Code:
struct T_LRC
{
    float iL,        // inductor current
          iC,        // capacitor current
          uC;        // capacitor voltage
    float reso, cut; // resistance; L = C
};

inline void lrc(T_LRC &flt, float in) // LRC bandpass
{
    flt.iL += flt.cut * flt.uC;
    flt.iC  = (in - flt.uC) * flt.reso - flt.iL;
    flt.uC += flt.cut * flt.iC;
}
Code:
amp = 1.0f;   // bass drum amplitude envelope
f   = .025f;  // bass drum oscillator frequency
c1  = 1.0f;   // sine oscillator state (cosine part)
s1  = 0.0f;   // sine oscillator state (sine part)
...
// --- ta ---
sawenv = svalue;
tsaw += sawfrq - (int)tsaw;   // naive sawtooth phase accumulator, wraps around 1.0
flt.cut  = 0.9f;
flt.reso = 0.9f;
lrc( flt, tsaw );             // bandpass-filter the raw saw
// --- bd ---
amp *= 0.9997f;               // exponential volume decay
f   *= 0.9997f;               // downward pitch sweep
s1 += c1 * f;                 // "magic circle" sine oscillator:
c1 -= s1 * f;                 //   rotates (c1, s1) by ~f radians per sample
float outfront = flt.iL * sawenv + (s1 * amp);
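If you drop that filter into a standalone file you can poke it directly. A sketch with the same struct and constants (the impulse-tap helper is mine, not from the original source): feed it a single impulse and the state rings briefly, then decays, since with cut = reso = 0.9 the poles sit well inside the unit circle.

```cpp
// Standalone version of the LRC bandpass: excite it with one impulse
// and watch the inductor-current tap ring and die away.
struct T_LRC
{
    float iL, iC, uC;  // inductor current, capacitor current, capacitor voltage
    float reso, cut;
};

inline void lrc(T_LRC &flt, float in) // LRC bandpass
{
    flt.iL += flt.cut * flt.uC;
    flt.iC  = (in - flt.uC) * flt.reso - flt.iL;
    flt.uC += flt.cut * flt.iC;
}

// Helper: value of the iL output tap n samples after an impulse.
float lrc_impulse_tap(int n)
{
    T_LRC flt = { 0.0f, 0.0f, 0.0f, 0.9f, 0.9f };
    float out = 0.0f;
    for (int i = 0; i <= n; ++i) {
        lrc(flt, i == 0 ? 1.0f : 0.0f); // single impulse input
        out = flt.iL;
    }
    return out;
}
```

In the snippet above the same iL tap is what gets multiplied by sawenv for the output.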
what kb_ said.
Quote:
That said, I _have_ been thinking about doing some walkthroughs like Gargaj's bass video, though; that's a format that actually makes sense. I mean, in sound design especially, I personally think it's all in the details/tweaks, but someone can follow the broad outline of what you're doing and be able to make something with their own character in no time. It's a good springboard for sure.
In all fairness, that idea is not a softsynth specific thing - plenty of good (and not-so-good) sound design tutorials on YouTube. Applying those techniques to a synth is usually relatively straightforward.
I think I have enough hints to start learning softs! Thank you guys very much!
Really helpful topic!
What is the way of generating the music? Prerender the whole thing at startup or just render blocks of samples as the demo flow?
kt: I suppose pre-rendering in a 64k isn't a completely unreasonable option anymore given how fast CPUs are, but generally in 64k you generate / mix realtime. In 4k the songs are usually so short and simple (few instruments, few effects) that rendering the whole thing in memory usually only takes 10-20 seconds, but it wins you considerable space by not having to buffer your DSP effects.
In 64k's you gen/mix realtime if your synth is fast enough; i know some people who had to just pre-render last year due to some sync bugs (don't know exactly why they had those bugs, though).
About 4k's: if you pre-gen the music, you only need one simple "render pass" and then ask Windows to play that huge sample. The amount of code is really small compared to creating a thread and keeping buffers updated on the fly as the song progresses.
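A hypothetical shape of that render pass (the decaying sine is only a placeholder for the actual synth; in a real Windows intro you'd then hand the buffer to waveOut via waveOutOpen / waveOutPrepareHeader / waveOutWrite and forget about it):

```cpp
// 4k-style "render pass" sketch: synthesize the entire song into one
// big 16-bit buffer up front, then play it with a single API call.
#include <cmath>
#include <vector>

const int kSampleRate  = 44100;
const int kSongSamples = kSampleRate * 2; // pretend the song is 2 seconds

std::vector<short> sound_render()
{
    std::vector<short> buf(kSongSamples);
    float amp = 1.0f;
    for (int i = 0; i < kSongSamples; ++i) {
        float s = std::sin(2.0f * 3.14159265f * 220.0f * i / kSampleRate) * amp;
        amp *= 0.99995f;                 // slow fade-out, stand-in for a song
        buf[i] = (short)(s * 32000.0f);  // float -> 16-bit PCM
    }
    return buf;
}
```

The whole playback side then collapses to a handful of calls and zero per-frame audio code, which is where the size win comes from.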
.. or you allocate one big buffer for the whole sound track, delegate sound rendering to another thread, and just start playing the buffer hoping that the sound thread renders faster than the buffer is played back. With multicore CPUs (and you can pretty much assume everyone has one nowadays) that's actually a viable thing to do.
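A sketch of that "render ahead of the play cursor" scheme, under the assumption of one render thread and a preallocated whole-song buffer (the sine is again just a placeholder; in a real prod the audio API would already be playing from the front of the buffer while this thread runs):

```cpp
// One thread fills a preallocated whole-song buffer; playback starts
// immediately and you hope rendering stays ahead of the play cursor.
#include <atomic>
#include <cmath>
#include <thread>
#include <vector>

struct Track {
    std::vector<short> buffer;     // one big buffer for the whole song
    std::atomic<int> rendered{0};  // how many samples are ready so far
};

void render_thread(Track &t)
{
    const int n = (int)t.buffer.size();
    for (int i = 0; i < n; ++i) {
        float s = std::sin(2.0f * 3.14159265f * 110.0f * i / 44100.0f);
        t.buffer[i] = (short)(s * 32000.0f);
        t.rendered.store(i + 1, std::memory_order_release); // publish progress
    }
}

// Kick off rendering in the background; the caller starts playback of
// t.buffer right away instead of waiting for the render to finish.
std::thread start_rendering(Track &t)
{
    return std::thread(render_thread, std::ref(t));
}
```

The failure mode is exactly the "hope" above: if the synth ever renders slower than realtime, the play cursor overtakes `rendered` and you hear garbage, so this only works when the synth has comfortable headroom.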
Pre-rendering very heavy synth instruments to samples if you don't change the parameters for that instrument is a one good optimization trick. Vesuri put that to good use in http://pouet.net/prod.php?which=54136
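A hypothetical illustration of that trick (the "expensive voice" here is a stand-in, not Vesuri's actual instrument): if an instrument always plays with the same parameters, render one note to a sample once, then just mix the cached sample at every note-on instead of re-running the synth.

```cpp
// Cache a heavy instrument: pay the per-sample synthesis cost once,
// then each note trigger is only an add into the master buffer.
#include <cmath>
#include <vector>

// Stand-in for an expensive per-sample synth voice (decaying sine).
static float expensive_voice(int i)
{
    return std::sin(0.05f * i) * std::exp(-0.001f * i);
}

std::vector<float> render_note_once(int length)
{
    std::vector<float> cache(length);
    for (int i = 0; i < length; ++i)
        cache[i] = expensive_voice(i);   // the only place the synth runs
    return cache;
}

// Mix the cached note into the master buffer at a trigger position.
void trigger_note(std::vector<float> &master,
                  const std::vector<float> &cache, int start)
{
    for (size_t i = 0; i < cache.size(); ++i) {
        size_t pos = (size_t)start + i;
        if (pos >= master.size()) break;
        master[pos] += cache[i];
    }
}
```

The obvious limitation is that the cache is only valid as long as the instrument's parameters never change between notes, which is exactly the condition stated above.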
kb_, that sounds pretty reasonable. But of course the API you're using must guarantee to play the sample from the memory location you've given. Do any do that?
revival, waveOut does. Check the example code in the 4Klang source. :)
Quote:
what kb_ said.
I wonder if trc_wm was looking into the future there?
Haha, that was the one API I was CERTAIN you couldn't use that way. Oh well :-) Thanks.
Well, given how old the API is, they surely didn't want to create buffer copies back then. :D
yumeji: it's from the dialogos 2001 4k src by delta9 & franky, check http://www.active-web.cc/html/research/bin/4ksrc.zip