Acid Glow by Loonies & Logicoma
Loonies and Logicoma present

Acid Glow

A 4k intro for Windows
Winner of the TRSAC 2022 PC intro compo

Credits:
Blueberry: Synth, 303 emulation
Booster: Music
Ferris: Visuals
Psycho: Framework, optimizations

This intro is the debut of Jingler, a work-in-progress modular 4k synth. Follow the development at: https://github.com/askeksa/Jingler

The emulated 303 synth and the other instruments and sound effects are implemented in the Zing programming language, which is part of the Jingler synth. The Zing compiler translates the Zing program into special bytecode, which is embedded in the intro along with a small bytecode compiler that translates the bytecode into x86 code before execution.

The algorithm for the 303 emulation is loosely based on the method used in JS303: https://github.com/thedjinn/js303

The final version includes a bit more variation in the visuals to give a greater sense of climax, plus some volume/panning adjustments in the music.

Some words about the visuals from Ferris:

About a month before the party I had the pleasure of attending a Jon Hopkins show in Oslo. It was absolutely excellent; obviously the audio was fantastic, but the visuals were great too, despite being little more than spotlights shining through copious amounts of fog. This really left an impression on me; I realized how effective such a simple setup could be, so I decided I'd keep that idea in my back pocket.

Well, it didn't stay there long, as a few days later Blueberry contacted me to see if I would want to do visuals for an intro for TRSAC to test his new 4k synth. Seeing as I didn't have anything else cooking at that point, and it would be my first IRL party in 3 years, I really didn't want to decline. As I was already thinking about it, I suggested that spotlights in fog might work well in 4k, but I wasn't sure it would carry a full intro. Fortunately, the plan was already to have Booster do an acid track to complement the acid compo at the party, where there would _also_ be spotlights in fog IRL, so this idea would actually be a perfect fit!

Still, the party was only a few weeks away (which, due to my current IRL situation, is not nearly as much time as it used to be!), and I knew that implementing it the straightforward way would be too slow to get a decent number of lights on screen, so while I did accept the offer, I was feeling a bit reluctant about it (and procrastinated a bit). However, experience has taught me that, especially when things are uncertain, the most fun and satisfying thing to do is generally to dig deeper and not just settle for the easiest option, so a couple of weeks before the party I finally bit the bullet and got started.

OK, so point lights in fog are not difficult to render at all just using Riemann sums, and with importance sampling a relatively small number of samples per light can be used. Still, I wanted to try to do something smarter so I could increase the number of lights (as <= 10 or so would have been too easy and likely not enough to build interesting forms from) and also hopefully reduce sampling artifacts, and it seemed like a purely analytical solution might be possible (at least for a relatively simple model, e.g. single scattering with an isotropic phase function). I spent several days at the limits of my (admittedly crude) understanding of both participating media rendering and calculus; alas, I was unable to find a solution which (1) was entirely analytical and (2) didn't crash the DX shader compiler.
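For reference, here is roughly what that straightforward approach looks like, as a minimal HLSL-style sketch (illustrative, not the intro's actual code; the function name, sample count and sampling warp are assumptions): accumulate the inverse-square falloff from one light along the eye ray with a Riemann sum, placing samples at equal angular steps as seen from the light so they cluster around the point of closest approach, which is a simple form of importance sampling. Extinction and the phase function are left out for brevity.

    // Illustrative sketch, not the intro's code: single scattering from one
    // point light along the eye ray ro + t*rd over [0, tMax], in homogeneous
    // fog with an isotropic phase function; extinction is dropped for brevity.
    float fogPointLightSampled(float3 ro, float3 rd, float tMax, float3 lightPos, int N)
    {
        float3 q = ro - lightPos;
        float  b = dot(rd, q);                          // -b is the ray parameter closest to the light
        float  h = sqrt(max(dot(q, q) - b * b, 1e-6));  // distance from the light to the ray's line
        float  thetaA = atan2(b, h);                    // angle seen from the light at t = 0
        float  thetaB = atan2(tMax + b, h);             // angle seen from the light at t = tMax

        float sum = 0.0;
        float tPrev = 0.0;
        for (int i = 1; i <= N; i++)
        {
            // Equal angular steps bunch the samples up near the closest
            // approach, where the 1/r^2 term peaks.
            float t  = h * tan(lerp(thetaA, thetaB, float(i) / float(N))) - b;
            float dt = t - tPrev;                       // non-uniform Riemann segment width
            tPrev = t;
            float r2 = (t + b) * (t + b) + h * h;       // squared distance to the light
            sum += dt / r2;                             // Riemann term, inverse-square falloff
        }
        return sum;                                     // scale by fog density / light colour outside
    }

Calling this once per light and scaling by fog density and light colour gives the baseline; since the cost grows with both the light count and the sample count, a decent number of lights quickly gets expensive, which is the problem described above.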
As the party drew closer and my patience dwindled, I started changing the approach, and ultimately found a set of compromises which I'm (relatively) happy with. The first major compromise is to use homogeneous fog (which can still look great if done correctly), and the second is to assume that the product of the integrals of two functions is the same as the integral of their product. Under these assumptions we can use an existing analytical solution ([0], and see [1] for a shadertoy implementation, cheers to greje656 for that!) for solving transmittance for each light as if it were a point light, and simply multiply that contribution by an integral representing the spotlight shape (both pieces are sketched below).

The spotlight shape formulation I used was simply the dot product of the (normalized) light vector and the spotlight direction vector, raised to some power to make a wider/narrower cone. Assuming we only calculate the integral for this in front of the light (i.e. where the dot product is positive), this should be straightforward to compute. So, an analytical intersection between the ray and a plane "behind" the light is performed, such that we can determine which segment of the ray to accumulate over (or whether we can skip accumulation for this light altogether). Unfortunately, I was unable to find an analytical solution for the actual integral (though I'm not entirely convinced it's not possible; a different formulation and/or an approximation should be possible here, which may be worth exploring again given more time and/or another use case), so I ended up just using a Riemann sum with a small (read: cheap/hacky) importance sampling approach. This introduces some noise, which ends up looking quite pleasant in many cases, especially when the light is pointing towards the viewer, but it can also look not-so-great, especially when the light is perpendicular to the eye ray and the cone is narrow. I did experiment a bit with trading noise for undersampling artifacts, but ultimately decided I liked the noise better.

Instead of just multiplying the integrals, I decided to leverage the fact that we calculate them separately by making the artistic choice to blend them, such that the point light solution is still faintly visible. This way we don't just get light contribution in the cones, but also a somewhat subtle "halo" in all directions, which I think looks quite nice. It also reduces noise a bit at the very tips of the cones, which can really help image stability in some cases.

Overall, I think the hybrid analytical/sampling solution turned out OK in terms of the speed/size/noise tradeoff. I still feel somewhat unsatisfied with not finding a faster/less noisy solution, but it's certainly better than nothing, and even though I didn't quite get where I wanted to, I really enjoyed going for it and spending time trying to do something new and non-obvious (to me, at least). And, importantly, it's still much faster than just sampling the entire integral, so it allowed enough lights to make interesting shapes, which ultimately fit the music quite well.

On that note, after getting some scenes together, it quickly became clear that I'm very much out of practice writing 4k shaders; the actual scene code ended up much larger than it could have been. I should have found a smarter/more parameterizable approach for building scenes that would have reduced them to simpler parameter sets.
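To make the point-light half of that compromise concrete, here is a sketch of the closed form in its simplest setting (homogeneous fog, isotropic phase, extinction dropped). The full solution in [0]/[1] covers more general cases, so this is only the core identity, not the intro's exact shader code.

    // Assumed simplified form, not the intro's exact code:
    //   integral over [0, tMax] of dt / ((t + b)^2 + h^2)
    //     = (atan((tMax + b) / h) - atan(b / h)) / h
    // where b = dot(rd, ro - lightPos), so -b is the ray parameter of the
    // point closest to the light, and h is the light's distance to the ray's line.
    float fogPointLightAnalytic(float3 ro, float3 rd, float tMax, float3 lightPos)
    {
        float3 q = ro - lightPos;
        float  b = dot(rd, q);
        float  h = sqrt(max(dot(q, q) - b * b, 1e-6));
        return (atan2(tMax + b, h) - atan2(b, h)) / h;
    }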
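The spotlight half, plus the blend, might then look something like the following sketch. Again this is illustrative only: the sample count, cone exponent, halo strength and the exact way the two terms are blended are guesses, not values from the intro.

    // Sketch under the same simplifying assumptions as above. The cone weight
    // is pow(dot(lightVec, spotDir), conePow), accumulated only in front of
    // the light by clipping the ray against the plane through the light with
    // normal spotDir, approximated with a small importance-sampled Riemann
    // sum, and combined with the analytic point-light term.
    float fogSpotLight(float3 ro, float3 rd, float tMax,
                       float3 lightPos, float3 spotDir, float conePow, int N)
    {
        float point = fogPointLightAnalytic(ro, rd, tMax, lightPos);
        float halo  = 0.05;                  // illustrative halo strength

        // Clip [0, tMax] to the half-space where dot(p - lightPos, spotDir) > 0.
        float dn = dot(rd, spotDir);
        float d0 = dot(ro - lightPos, spotDir);
        float t0 = 0.0, t1 = tMax;
        if (abs(dn) > 1e-5)
        {
            float tp = -d0 / dn;
            if (dn > 0.0) t0 = max(t0, tp); else t1 = min(t1, tp);
        }
        else if (d0 <= 0.0)
        {
            t1 = t0;                         // entire ray lies behind the light
        }
        if (t1 <= t0) return halo * point;   // no cone contribution, halo only

        // Average cone weight over the clipped segment, with samples warped
        // towards the closest approach (a cheap/hacky importance sampling).
        float3 q = ro - lightPos;
        float  b = dot(rd, q);
        float  h = sqrt(max(dot(q, q) - b * b, 1e-6));
        float  thetaA = atan2(t0 + b, h);
        float  thetaB = atan2(t1 + b, h);

        float spotShape = 0.0;
        float tPrev = t0;
        for (int i = 1; i <= N; i++)
        {
            float t  = h * tan(lerp(thetaA, thetaB, float(i) / float(N))) - b;
            float dt = t - tPrev;
            tPrev = t;
            float3 L = normalize(ro + t * rd - lightPos);
            spotShape += dt * pow(saturate(dot(L, spotDir)), conePow);
        }
        spotShape /= (t1 - t0);              // normalize to an average cone weight

        // Blend rather than a pure product, so the point-light solution stays
        // faintly visible as an omnidirectional halo around the cone.
        return point * (spotShape + halo);
    }

Summing this over all lights, scaled by each light's colour and the fog density, would give the per-pixel in-scattered light in this simplified setup.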
Still, there was enough "air" in there to get it down (big thanks to Psycho for helping so much with reining this in!), and along with a bit of byte hunting in the synth code/music data/main function (and also dropping rocket in lieu of some very basic if statements in the shader, as we didn't need that much sync), we eventually got it down enough for the compo.

Special thanks to Blueberry for inviting me to participate in this project; to Booster for making a banger 4k track; to Psycho for donating the framework and for tips/bug hunting/optimization help (and all while being stuck at home with a broken elbow; you were missed at the party!); to karo133ne for support (especially dealing with my frustration when things weren't working), for name/direction suggestions, and of course for buying me the Jon Hopkins tickets in the first place; to the TRSAC orgas/compo team for yet again making a *fantastic* party and for their patience getting our intro working on (one of) the compo machines; and, relatedly, to Gekko for donating a "spare soundcard" in the form of an enormous blinky box, which fortunately had less line noise than the external sound card that that particular machine had previously!

[0]: Vincent Pegoraro, Mathias Schott, and Steven G. Parker. 2009. An analytical approach to single scattering for anisotropic media and light distributions. In Proceedings of Graphics Interface 2009 (GI '09). Canadian Information Processing Society, 71–77.
[1]: https://www.shadertoy.com/view/XtjfzD